National Library of Energy BETA

Sample records for guido bartels ibm

  1. Guido DeHoratiis

    Broader source: Energy.gov [DOE]

    Guido DeHoratiis is the Associate Deputy Assistant Secretary, Office of Oil and Natural Gas, in the Department of Energy's Office of Fossil Energy.  In this position, he is responsible for...

  2. Niek Lopes Cardozo Guido Lange Gert-Jan Kramer (Shell Global...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    fusion power. Niek Lopes Cardozo Guido Lange Gert-Jan Kramer (Shell Global Solutions). Lopes Cardozo, Lange, Kramer; Why we have solar cells but not yet nuclear fusion What is ...

  3. IBM References | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Feedback Form IBM References Contents IBM Redbooks A2 Processor Manual QPX Vector Instruction Set Architecture XL Compiler Documentation MASS Documentation Back to top IBM...

  4. IBM Presentation Template Full Version

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    ... Residential and Small Commercial Energy Customers 22% 21% 31% 26% Sample Size 5084 2010 IBM Corporation 7 7 IBM Confidential DRAFT In home technology will be one way to ...

  5. IBM era: 1960-64

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    great challenge reminiscent of the one faced by the Manhattan Project." Director Charles McMillan The Stretch, IBM's first transistorized computer: meeting Lab's growing computing...

  6. Electricity Advisory Committee

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

5, 2012 Electricity Advisory Committee 2012 Membership Roster: Richard Cowart, Regulatory Assistance Project (Chair); Irwin Popowsky, Pennsylvania Consumer Advocate (Vice Chair); William Ball, Southern Company; Guido Bartels, IBM; Rick Bowen, Alcoa; Merwin Brown, California Institute for Energy and Environment; Ralph Cavanagh, Natural Resources Defense Council; The Honorable Paul Centolella, Public Utilities Commission of Ohio; David Crane, NRG Energy, Inc.; The Honorable Robert Curry, New York State Public Service

  7. V-178: IBM Data Studio Web Console Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    IBM Data Studio Web Console uses the IBM Java Runtime Environment (JRE) and might be affected by vulnerabilities in the IBM JRE

  8. T-686: IBM Tivoli Integrated Portal Java Double Literal Denial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    this November 2011 IBM Downloads Addthis Related Articles V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities T-694: IBM Tivoli Federated Identity...

  9. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilit...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Articles U-179: IBM Java 7 Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-094: IBM Multiple Products Multiple...

  10. International Business Machines Corp IBM | Open Energy Information

    Open Energy Info (EERE)

    Business Machines Corp IBM Jump to: navigation, search Name: International Business Machines Corp (IBM) Place: Armonk, New York Zip: 10504 Sector: Services Product: IBM is a...

  11. IBM's New Flat Panel Displays

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    by J. Stöhr (SSRL), M. Samant (IBM), J. Lüning (SSRL) Today's laptop computers utilize flat panel displays where the light transmission from the back to the front of the display is modulated by orientation changes in liquid crystal (LC) molecules. Details are discussed in Ref. 2 below. One of the key steps in the manufacture of the displays is the alignment of the LC molecules in the display. Today this is done by mechanical rubbing of two polymer surfaces and then sandwiching the LC between

  12. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Probes Material Capabilities at the ALS IBM Probes Material Capabilities at the ALS Print Wednesday, 12 February 2014 11:05 Vanadium dioxide, one of the few known materials that acts like an insulator at low temperatures but like a metal at warmer temperatures, is a somewhat futuristic material that could yield faster and much more energy-efficient electronic devices. Researchers from IBM's forward-thinking Spintronic Science and Applications Center (SpinAps) recently used the ALS to gain

  14. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Researchers from IBM's forward-thinking Spintronic Science and Applications Center (SpinAps) recently used the ALS to gain greater insight into vanadium dioxide's unusual phase ...

  15. V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    9: IBM Security AppScan Enterprise Multiple Vulnerabilities V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities March 26, 2013 - 12:56am Addthis PROBLEM: IBM Security...

  16. Integrated Building Management System (IBMS)

    SciTech Connect (OSTI)

    Anita Lewis

    2012-07-01

This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.
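
    The passive monitoring pattern described above — learning normal equipment behavior and flagging deviations without issuing control commands — can be sketched as follows. This is a minimal illustrative example, not the actual IBMS algorithms; the function names, band-learning scheme, and threshold margin are assumptions.

    ```python
    # Hypothetical sketch of passive fault detection: derive a "normal" band
    # from historical readings, then flag observations outside it. The check
    # only reads sensor values; it never writes to the control sequence.

    def learn_normal_band(observations, margin=0.1):
        """Derive a [low, high] band from historical sensor readings,
        widened by a fractional margin of the observed span."""
        low, high = min(observations), max(observations)
        span = high - low
        return (low - margin * span, high + margin * span)

    def detect_faults(readings, band):
        """Return indices of readings falling outside the learned band."""
        low, high = band
        return [i for i, r in enumerate(readings) if not (low <= r <= high)]

    history = [20.0, 21.5, 22.0, 21.0, 20.5]   # e.g. supply temperatures, deg C
    band = learn_normal_band(history)
    faults = detect_faults([21.0, 35.0, 20.8], band)
    ```

    Because the detector is a pure function of observed data, it can run alongside the equipment's control loop, consistent with the non-interference property described above.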

  17. U-181: IBM WebSphere Application Server Information Disclosure...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Execution Vulnerability U-272: IBM WebSphere Commerce User Information Disclosure Vulnerability T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site ...

  18. U-116: IBM Tivoli Provisioning Manager Express for Software Distributi...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    for the affected ActiveX control Addthis Related Articles V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-094: IBM Multiple Products Multiple...

  19. V-122: IBM Tivoli Application Dependency Discovery Manager Java...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities T-694: IBM Tivoli Federated Identity...

  20. V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-122: IBM Tivoli Application...

  1. V-094: IBM Multiple Products Multiple Vulnerabilities | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Multiple Vulnerabilities V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple ...

  2. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities April 30, 2013 - 12:09am Addthis PROBLEM: IBM Tivoli Federated Identity Manager Products Java ...

  3. August 15, 2001: IBM ASCI White | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5, 2001: IBM ASCI White August 15, 2001: IBM ASCI White August 15, 2001: IBM ASCI White August 15, 2001 Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White supercomputer with 8,192 processors that perform 12.3 trillion operations per second.

  4. V-074: IBM Informix Genero libpng Integer Overflow Vulnerability |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy 74: IBM Informix Genero libpng Integer Overflow Vulnerability V-074: IBM Informix Genero libpng Integer Overflow Vulnerability January 22, 2013 - 12:11am Addthis PROBLEM: IBM Informix Genero libpng Integer Overflow Vulnerability PLATFORM: IBM Informix Genero releases prior to 2.41 - all platforms ABSTRACT: A vulnerability has been reported in libpng. REFERENCE LINKS: IBM Security Bulletin: 1620982 Secunia Advisory SA51905 Secunia Advisory SA48026 CVE-2011-3026 IMPACT

  5. V-132: IBM Tivoli System Automation Application Manager Multiple

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Vulnerabilities | Department of Energy 2: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities April 12, 2013 - 6:00am Addthis PROBLEM: IBM has acknowledged multiple vulnerabilities in IBM Tivoli System Automation Application Manager PLATFORM: The vulnerabilities are reported in IBM Tivoli System Automation Application Manager versions 3.1, 3.2, 3.2.1, and 3.2.2 ABSTRACT: Multiple security

  6. V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    | Department of Energy 0: IBM Application Manager For Smart Business Multiple Vulnerabilities V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities June 18, 2013 - 12:38am Addthis PROBLEM: IBM Application Manager For Smart Business Multiple Vulnerabilities PLATFORM: IBM Application Manager For Smart Business 1.x ABSTRACT: A security issue and multiple vulnerabilities have been reported in IBM Application Manager For Smart Business REFERENCE LINKS: Security Bulletin

  7. U-181: IBM WebSphere Application Server Information Disclosure

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Vulnerability | Department of Energy 81: IBM WebSphere Application Server Information Disclosure Vulnerability U-181: IBM WebSphere Application Server Information Disclosure Vulnerability June 1, 2012 - 7:00am Addthis PROBLEM: A vulnerability has been reported in IBM WebSphere Application Server. PLATFORM: IBM WebSphere Application Server 6.1.x IBM WebSphere Application Server 7.0.x IBM WebSphere Application Server 8.0.x ABSTRACT: The vulnerability is caused due to missing access controls in

  8. V-132: IBM Tivoli System Automation Application Manager Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Application Manager versions 3.1, 3.2, 3.2.1, and 3.2.2 ABSTRACT: Multiple security vulnerabilities exist in the IBM Java Runtime Environment component of IBM Tivoli System ...

  9. U-198: IBM Lotus Expeditor Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

8: IBM Lotus Expeditor Multiple Vulnerabilities U-198: IBM Lotus Expeditor Multiple Vulnerabilities June 25, 2012 - 7:00am Addthis PROBLEM: Multiple vulnerabilities have been reported in IBM Lotus Expeditor. PLATFORM: IBM Lotus Expeditor 6.x ABSTRACT: The vulnerabilities can be exploited by malicious people to conduct cross-site scripting attacks, disclose potentially sensitive information, bypass certain security restrictions, and compromise a user's system. Reference Links: Vendor Advisory

  10. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Vulnerabilities | Department of Energy 45: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities April 30, 2013 - 12:09am Addthis PROBLEM: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities PLATFORM: IBM Tivoli Federated Identity Manager versions 6.1, 6.2.0, 6.2.1, and 6.2.2. IBM Tivoli Federated Identity Manager Business Gateway versions 6.1.1, 6.2.0, 6.2.1

  11. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    SciTech Connect (OSTI)

    Turkan, Nureddin [Bozok University, Faculty of Arts and Science, Department of Physics, Divanh Yolu, 66200 Yozgat (Turkey)

    2008-11-11

E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated, together with B(E2) and B(M1) values, by using the PHINT and/or NP-BOS codes. Correct energies must first be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. The central problem is to find the best-fitted parameter values of the model. By using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained and the energies then calculated. The calculated results are in good agreement with the experimental ones, and the energy values obtained with EWofFP-IBM are clearly better than the previous theoretical data.
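
    For context on what is being fitted: a commonly used schematic form of the IBM-1 Hamiltonian is the consistent-Q parametrization shown below. This is a standard textbook convention, not necessarily the exact parametrization implemented in PHINT/NP-BOS.

    ```latex
    % Schematic IBM-1 Hamiltonian in the (extended) consistent-Q formalism:
    \hat{H} = \epsilon\,\hat{n}_d \;-\; \kappa\,\hat{Q}^{\chi}\cdot\hat{Q}^{\chi},
    \qquad
    \hat{Q}^{\chi}_{\mu} = \left(s^{\dagger}\tilde{d} + d^{\dagger}s\right)^{(2)}_{\mu}
      + \chi\left(d^{\dagger}\tilde{d}\right)^{(2)}_{\mu}.
    ```

    A fit such as EWofFP-IBM then amounts to choosing parameters like $\epsilon$, $\kappa$, and $\chi$ so that the computed level energies reproduce the experimental spectrum.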

  12. Design and development of an IBM/VM menu system

    SciTech Connect (OSTI)

    Cazzola, D.J.

    1992-10-01

This report describes a full screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system. The system was developed to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.

  13. T-681:IBM Lotus Symphony Multiple Unspecified Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple unspecified vulnerabilities in IBM Lotus Symphony 3 before FP3 have unknown impact and attack vectors, related to "critical security vulnerability issues."

  14. V-054: IBM WebSphere Application Server for z/OS Arbitrary Command Execution Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability was reported in the IBM HTTP Server component 5.3 in IBM WebSphere Application Server (WAS) for z/OS

  15. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    0: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities August 29, ...

  16. New ALS Technique Guides IBM in Next-Generation Semiconductor...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    chip, which then form transistors," says Jed Pitera, a research staff member in science and technology at IBM Research-Almaden. "But it's also really hard to do the...

  18. Generalized Information Architecture for Managing Requirements in IBM's Rational DOORS® Application.

    SciTech Connect (OSTI)

    Aragon, Kathryn M.; Eaton, Shelley M.; McCornack, Marjorie T.; Shannon, Sharon A.

    2014-12-01

When a requirements engineering effort fails to meet expectations, often the requirements management tool is blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually it is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). This generalized information architecture can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool. Version 1.0 4 November 201

  19. V-211: IBM iNotes Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1: IBM iNotes Multiple Vulnerabilities V-211: IBM iNotes Multiple Vulnerabilities August 5, 2013 - 6:00am Addthis PROBLEM: Multiple vulnerabilities have been reported in IBM Lotus iNotes PLATFORM: IBM iNotes 9.x ABSTRACT: IBM iNotes has two cross-site scripting vulnerabilities and an ActiveX Integer overflow vulnerability REFERENCE LINKS: Secunia Advisory SA54436 IBM Security Bulletin 1645503 CVE-2013-3027 CVE-2013-3032 CVE-2013-3990 IMPACT ASSESSMENT: High DISCUSSION: 1) Certain input related

  20. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Attacks | Department of Energy 9: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks August 28, 2013 - 6:00am Addthis PROBLEM: Several vulnerabilities were reported in IBM Lotus iNotes PLATFORM: IBM Lotus iNotes 8.5.x ABSTRACT: IBM Lotus iNotes 8.5.x contains four cross-site scripting vulnerabilities REFERENCE LINKS: Security Tracker Alert ID 1028954 IBM Security Bulletin 1647740

  1. U-111: IBM AIX ICMP Processing Flaw Lets Remote Users Deny Service...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    aixefixessecurityicmpfix.tar Addthis Related Articles U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service V-031: IBM WebSphere DataPower...

  2. U-114: IBM Personal Communications WS File Processing Buffer Overflow Vulnerability

    Office of Energy Efficiency and Renewable Energy (EERE)

    A vulnerability in WorkStation files (.ws) by IBM Personal Communications could allow a remote attacker to cause a denial of service (application crash) or potentially execute arbitrary code on vulnerable installations of IBM Personal Communications.

  3. V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    7: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets May 2, 2013 - 6:00am Addthis...

  4. U.S. Department of Energy and IBM to Collaborate in Advancing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Supercomputing Technology | Department of Energy IBM to Collaborate in Advancing Supercomputing Technology U.S. Department of Energy and IBM to Collaborate in Advancing Supercomputing Technology November 15, 2006 - 9:25am Addthis Lawrence Livermore and Argonne National Lab Scientists to Work with IBM Designers WASHINGTON, DC -- The U.S. Department of Energy (DOE) announced today that its Office of Science, the National Nuclear Security Administration (NNSA) and IBM will share the cost of a

  5. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Vulnerabilities | Department of Energy 0: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities August 29, 2013 - 4:10am Addthis PROBLEM: Multiple vulnerabilities have been reported in IBM TRIRIGA Application Platform, which can be exploited by malicious people to conduct cross-site scripting attacks. PLATFORM: IBM TRIRIGA Application Platform 2.x ABSTRACT: The vulnerabilities are

  6. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Commands on the Target System | Department of Energy 49: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System December 1, 2011 - 9:00am Addthis PROBLEM: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System. PLATFORM: IBM Tivoli Netcool Reporter prior to 2.2.0.8 ABSTRACT: A vulnerability was reported in IBM Tivoli Netcool

  7. International Border Management Systems (IBMS) Program: visions and strategies.

    SciTech Connect (OSTI)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  8. EZVIDEO, FORTRAN graphics routines for the IBM AT

    SciTech Connect (OSTI)

    Patterson, M.R.; Holdeman, J.T.; Ward, R.C.; Jackson, W.L.

    1989-10-01

A set of IBM PC-based FORTRAN plotting routines called EZVIDEO is described in this report. These routines are written in FORTRAN and can be called from FORTRAN programs. EZVIDEO simulates a subset of the well-known DISSPLA graphics calls and makes plots directly on the IBM AT display screen. Screen dumps can also be made to an attached LaserJet or Epson printer to make hard copy without using terminal emulators. More than forty DISSPLA calls are simulated by the EZVIDEO routines. Typical screen plots require about 10 seconds (s), and good hard copy of the screen image on a laser printer requires less than 2 minutes (min). This higher-resolution hard copy is adequate for most purposes because of the enhanced resolution of the screen in the EGA and VGA modes. These EZVIDEO routines give the IBM AT user a stand-alone capability to make useful scientific or engineering plots directly on the AT, using data generated in FORTRAN programs. The routines will also work on the IBM PC or XT in CGA mode, but they require more time and yield less resolution. 7 refs., 4 figs.

  9. WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy 1_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf (18.1 MB) More Documents & Publications WA_04_053_IBM_CORP_Waiver_of_the_Government_U.S._and_Foreign.pdf WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf Advance Patent Waiver W(A)2002-023

  10. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability was reported in IBM Rational ClearQuest. A remote user can cause arbitrary code to be executed on the target user's system.

  11. U-186: IBM WebSphere Sensor Events Multiple Vulnerabilities | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy 86: IBM WebSphere Sensor Events Multiple Vulnerabilities U-186: IBM WebSphere Sensor Events Multiple Vulnerabilities June 8, 2012 - 7:00am Addthis PROBLEM: Multiple vulnerabilities have been reported in IBM WebSphere Sensor Events PLATFORM: IBM WebSphere Sensor Events 7.x ABSTRACT: Some vulnerabilites have unknown impacts and others can be exploited by malicious people to conduct cross-site scripting attacks. Reference Links: Secunia ID 49413 No CVE references. Vendor URL IMPACT

  12. V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple security vulnerabilities exist in the Java Runtime Environments (JREs) that can affect the security of IBM Tivoli Application Dependency Discovery Manager

  13. T-594: IBM solidDB Password Hash Authentication Bypass Vulnerability

    Broader source: Energy.gov [DOE]

    This vulnerability could allow remote attackers to execute arbitrary code on vulnerable installations of IBM solidDB. Authentication is not required to exploit this vulnerability.

  14. T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Cross-Site Scripting Attacks | Department of Energy 2: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks September 21, 2011 - 8:15am Addthis PROBLEM: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks. PLATFORM: WebSphere Commerce Edition V7.0 ABSTRACT: A remote user can access the target user's cookies (including

  15. U-116: IBM Tivoli Provisioning Manager Express for Software Distribution Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple vulnerabilities have been reported in IBM Tivoli Provisioning Manager Express for Software Distribution, which can be exploited by malicious people to conduct SQL injection attacks and compromise a user's system

  16. T-561: IBM and Oracle Java Binary Floating-Point Number Conversion Denial of Service Vulnerability

    Broader source: Energy.gov [DOE]

    IBM and Oracle Java products contain a vulnerability that could allow an unauthenticated, remote attacker to cause a denial of service (DoS) condition on a targeted system.

  17. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New ALS Technique Guides IBM in Next-Generation Semiconductor Development New ALS Technique Guides IBM in Next-Generation Semiconductor Development Print Wednesday, 21 January 2015 09:37 A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials

  18. Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Supercomputer: Chip-architecture breakthrough accelerates path to exascale computing; helps computers tackle complex, cognitive tasks such as pattern recognition sensory processing | Department of Energy and IBM Collaborate to Build New Brain-Inspired Supercomputer: Chip-architecture breakthrough accelerates path to exascale computing; helps computers tackle complex, cognitive tasks such as pattern recognition sensory processing Lawrence Livermore and IBM Collaborate to Build New

  19. T-615: IBM Rational System Architect ActiveBar ActiveX Control Lets Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    There is a high risk security vulnerability with the ActiveBar ActiveX controls used by IBM Rational System Architect.

  20. U-007: IBM Rational AppScan Import/Load Function Flaws Let Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    Two vulnerabilities were reported in IBM Rational AppScan. A remote user can cause arbitrary code to be executed on the target user's system.

  1. Shape coexistence in the neutron-deficient Pt isotopes in the configuration-mixed IBM

    SciTech Connect (OSTI)

    Vargas, Carlos E.; Campuzano, Cuauhtemoc; Morales, Irving O.; Frank, Alejandro; Van Isacker, Piet

    2008-05-12

    The matrix-coherent state approach in the IBM with configuration mixing is used to describe the geometry of neutron-deficient Pt isotopes. Employing a parameter set for all isotopes determined previously, it is found that the lowest minimum goes from spherical to oblate and finally acquires a prolate shape when approaching the mid-shell Pt isotopes.

  2. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    SciTech Connect (OSTI)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott

    2013-01-01

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.
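
    The approach described — defining functional semantics for individual instructions and composing them into net program behavior — can be illustrated with a toy sketch. The LR and AR mnemonics below are modeled loosely on the IBM/370 register instructions (load register, add register); this is an illustration of the idea of behavior computation, not the Hyperion system's actual implementation.

    ```python
    # Toy behavior computation: give each instruction a functional semantics
    # (a machine-state -> machine-state map) and compose a straight-line
    # sequence into one net state transformer.

    def LR(dst, src):
        """Load register: dst := src (modeled on the IBM/370 LR mnemonic)."""
        return lambda s: {**s, dst: s[src]}

    def AR(dst, src):
        """Add register: dst := dst + src (modeled on the IBM/370 AR mnemonic)."""
        return lambda s: {**s, dst: s[dst] + s[src]}

    def behavior(instructions):
        """Compose per-instruction semantics into the net behavior of the sequence."""
        def run(state):
            for step in instructions:
                state = step(state)
            return state
        return run

    # Net behavior of the two-instruction sequence: R1 ends as R2 + R3,
    # regardless of R1's initial value.
    prog = behavior([LR("R1", "R2"), AR("R1", "R3")])
    final = prog({"R1": 0, "R2": 5, "R3": 7})
    ```

    The point of computing behavior this way is that the composed transformer states what the code does as a whole, which is the kind of summary needed to reason about functionality and security properties.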

  3. Shape coexistence in the neutron-deficient Pt isotopes in a configuration mixing IBM

    SciTech Connect (OSTI)

    Morales, Irving O.; Vargas, Carlos E.; Frank, Alejandro

    2004-09-13

    The recently proposed matrix-coherent state approach for configuration-mixing IBM is used to describe the evolving geometry of the neutron-deficient Pt isotopes. It is found that the potential energy surface (PES) of the Pt isotopes evolves, as the number of neutrons decreases, from spherical to oblate and then to prolate shapes, in agreement with experimental measurements. Oblate-prolate shape coexistence is observed in the {sup 194,192}Pt isotopes.

  4. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New ALS Technique Guides IBM in Next-Generation Semiconductor Development Print A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials that self-assemble spontaneously form nanostructures down to the molecular scale, which would revolutionize

  5. Statement by Guido DeHoratiis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    will only address a subset of unconventional resources: shale gas, tight gas, shale oil, and tight oil, and a robust Federal research and development (R&D) plan is...

  6. How Would IBM's Quiz-Show Computer, Watson, Do as a Competitor in the

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Science Bowl? | U.S. DOE Office of Science (SC) How Would IBM's Quiz-Show Computer, Watson, Do as a Competitor in the National Science Bowl? 05.17.16

  7. Studies of phase transitions and quantum chaos relationships in extended Casten triangle of IBM-1

    SciTech Connect (OSTI)

    Proskurins, J.; Andrejevs, A.; Krasta, T.; Tambergs, J. [University of Latvia, Institute of Solid State Physics (Latvia)], E-mail: juris_tambergs@yahoo.com

    2006-07-15

    A precise solution of the classical energy functional E(N, {eta}, {chi}; {beta}) minimum problem with respect to the deformation parameter {beta} is obtained for the simplified Casten version of the standard interacting boson model (IBM-1) Hamiltonian. The first-order phase transition lines as well as the critical points of X(5), -X(5), and E(5) symmetries are considered. The dynamical criteria of quantum chaos (the basis state fragmentation width and the wave function entropy) are studied for the ({eta}, {chi}) parameter space of the extended Casten triangle, and the possible relationships between these criteria and phase transition lines are discussed.
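
    The minimization described above (finding the {beta} that minimizes a one-parameter energy surface) can be sketched numerically with a golden-section search. The functional E_toy below is an illustrative stand-in, not the actual IBM-1 Casten energy expression; only the minimization procedure itself is the point.

```python
# Golden-section minimization of a one-parameter energy surface E(beta).
# E_toy is a hypothetical stand-in for the IBM-1 classical functional
# E(N, eta, chi; beta); it merely produces a deformed (nonzero-beta) minimum.

def golden_minimize(f, a, b, tol=1e-8):
    """Locate the minimum of a unimodal function f on [a, b]."""
    phi = (5 ** 0.5 - 1) / 2          # inverse golden ratio, ~0.618
    while b - a > tol:
        c = b - phi * (b - a)         # interior probe points
        d = a + phi * (b - a)
        if f(c) < f(d):
            b = d                     # minimum lies in [a, d]
        else:
            a = c                     # minimum lies in [c, b]
    return (a + b) / 2

def E_toy(beta, eta=0.4, chi=-0.5):
    # Illustrative surface: a spherical-driving term against a deformation-driving term.
    return (eta * beta**2 / (1 + beta**2)
            + (1 - eta) * (chi * beta**3 - beta**2) / (1 + beta**2) ** 2)

beta_min = golden_minimize(E_toy, 0.0, 2.0)   # deformed minimum at beta > 0
```

    For the real model one would repeat this scan over the ({eta}, {chi}) parameter space of the Casten triangle and track how the location and depth of the minimum change.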

  8. The conjugate gradient NAS parallel benchmark on the IBM SP1

    SciTech Connect (OSTI)

    Trefethen, A.E.; Zhang, T.

    1994-12-31

    The NAS Parallel Benchmarks are a suite of eight benchmark problems developed at the NASA Ames Research Center. They are specified in such a way that the benchmarkers are free to choose the language and method of implementation to suit the system in which they are interested. In this presentation the authors will discuss the Conjugate Gradient benchmark and its implementation on the IBM SP1. The SP1 is a parallel system composed of RS/6000 nodes connected by a high-performance switch. They will compare the results of the SP1 implementation with those reported for other machines. At this time, such a comparison shows the SP1 to be very competitive.
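
    For readers unfamiliar with the kernel being benchmarked, the conjugate gradient iteration for a symmetric positive-definite system Ax = b looks like the following; this is a minimal dense sketch, not the NPB reference implementation (which uses a large sparse random matrix and fixed iteration counts).

```python
# Minimal conjugate-gradient solver for a symmetric positive-definite system.
# Illustrative sketch of the kernel the NPB CG benchmark times.

def conjugate_gradient(matvec, b, x0, iters=50, tol=1e-12):
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]     # residual r = b - A x
    p = list(r)                                       # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                              # residual small enough
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example: solve [[4, 1], [1, 3]] x = [1, 2]; the exact answer is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = conjugate_gradient(matvec, [1.0, 2.0], [0.0, 0.0])
```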

  9. Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect (OSTI)

    Kaper, H.; Ralley, D.; Restrepo, J.; Tiepei, S.

    1995-12-31

    DIASS-M4C, a digital additive instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds and the degree of control the user can have justify the effort and the use of such a large computer.

  10. Intelligent Bioreactor Management Information System (IBM-IS) for Mitigation of Greenhouse Gas Emissions

    SciTech Connect (OSTI)

    Paul Imhoff; Ramin Yazdani; Don Augenstein; Harold Bentley; Pei Chiu

    2010-04-30

    Methane is an important contributor to global warming with a total climate forcing estimated to be close to 20% that of carbon dioxide (CO2) over the past two decades. The largest anthropogenic source of methane in the US is 'conventional' landfills, which account for over 30% of anthropogenic emissions. While controlling greenhouse gas emissions must necessarily focus on large CO2 sources, attention to reducing CH4 emissions from landfills can result in significant reductions in greenhouse gas emissions at low cost. For example, the use of 'controlled' or bioreactor landfilling has been estimated to reduce annual US greenhouse emissions by about 15-30 million tons of CO2 carbon (equivalent) at costs between $3-13/ton carbon. In this project we developed or advanced new management approaches, landfill designs, and landfill operating procedures for bioreactor landfills. These advances are needed to address lingering concerns about bioreactor landfills (e.g., efficient collection of increased CH4 generation) in the waste management industry, concerns that hamper bioreactor implementation and the consequent reductions in CH4 emissions. Collectively, the advances described in this report should result in better control of bioreactor landfills and reductions in CH4 emissions. Several advances are important components of an Intelligent Bioreactor Management Information System (IBM-IS).

  11. ISTUM PC: industrial sector technology use model for the IBM-PC

    SciTech Connect (OSTI)

    Roop, J.M.; Kaplan, D.T.

    1984-09-01

    A project to improve and enhance the Industrial Sector Technology Use Model (ISTUM) was originated in the summer of 1983. The project had six identifiable objectives: update the data base; improve run-time efficiency; revise the reference base case; conduct case studies; provide technical and promotional seminars; and organize a service bureau. This interim report describes which of these objectives have been met and which tasks remain to be completed. The most dramatic achievement has been in the area of run-time efficiency. From a model that required a large proportion of the total resources of a mainframe computer and a great deal of effort to operate, the current version of the model (ISTUM-PC) runs on an IBM Personal Computer. The reorganization required for the model to run on a PC has additional advantages: the modular programs are somewhat easier to understand and the data base is more accessible and easier to use. A simple description of the logic of the model is given in this report. To generate the necessary funds for completion of the model, a multiclient project is proposed. This project will extend the industry coverage to all the industrial sectors, including the construction of process flow models for chemicals and petroleum refining. The project will also calibrate this model to historical data and construct a base case and alternative scenarios. The model will be delivered to clients and training provided. 2 references, 4 figures, 3 tables.

  12. T-559: Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote execution

    Broader source: Energy.gov [DOE]

    Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote attackers to execute arbitrary code via crafted arguments in the USELASTCOMMITTED session environment option in a SQL SET ENVIRONMENT statement.

  13. Study of Even-Even/Odd-Even/Odd-Odd Nuclei in Zn-Ga-Ge Region in the Proton-Neutron IBM/IBFM/IBFFM

    SciTech Connect (OSTI)

    Yoshida, N.; Brant, S.; Zuffi, L.

    2009-08-26

    We study the even-even, odd-even and odd-odd nuclei in the region including Zn-Ga-Ge in the proton-neutron IBM and the models derived from it: IBM2, IBFM2, IBFFM2. We describe {sup 67}Ga, {sup 65}Zn, and {sup 68}Ga by coupling odd particles to a boson core {sup 66}Zn. We also calculate the beta{sup +}-decay rates among {sup 68}Ge, {sup 68}Ga and {sup 68}Zn.

  14. IBM Blue Gene Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    How to use Open|SpeedShop to Analyze the Performance of Parallel Codes. Donald Frederick, LLNL (LLNL-PRES-508651). Performance analysis is becoming more important: complex architectures, complex applications, and the mapping of applications onto architectures. It is often hard to know where to start: Which experiments to run first? How to plan follow-on experiments? What kind of problems can be explored? How to interpret the data?

  15. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    SciTech Connect (OSTI)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

    2009-01-10

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of the total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
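
    The "embarrassingly parallel column computations" the authors exploit can be illustrated as follows: each column is processed with no data shared between columns, so columns map freely onto workers (SPEs on the Cell, threads here). The per-column kernel is a hypothetical stand-in, not the GEOS-5 solar radiation code.

```python
# Independent per-column physics: a hypothetical Beer-Lambert attenuation kernel
# standing in for the GEOS-5 solar radiation component.

from concurrent.futures import ThreadPoolExecutor
from math import exp

def radiative_kernel(column):
    """Attenuate a unit top-of-atmosphere flux down through a column's layers."""
    flux, out = 1.0, []
    for optical_depth in column:       # layers, ordered top of atmosphere down
        flux *= exp(-optical_depth)
        out.append(flux)
    return out

columns = [[0.1, 0.2, 0.05],           # each inner list is one atmospheric column
           [0.3, 0.1, 0.1],
           [0.0, 0.4, 0.2]]

# No column reads another column's data, so the map distributes trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(radiative_kernel, columns))
```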

  16. Survivability enhancement study for C/sup 3/I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    SciTech Connect (OSTI)

    Not Available

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C/sup 3/I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  17. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and temperature-dependent x-ray absorption spectroscopy experiments, in conjunction with x-ray diffraction and electrical transport measurements. The researchers were able to...

  18. U-179: IBM Java 7 Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    The vulnerabilities can be exploited by malicious users to disclose certain information, and by malicious people to disclose potentially sensitive information, hijack a user's session, conduct DNS cache poisoning attacks, manipulate certain data, cause a DoS (Denial of Service), and compromise a vulnerable system.

  19. V-118: IBM Lotus Domino Multiple Vulnerabilities | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    to version 9.0 or update to version 8.5.3 Fix Pack 4 when available Addthis Related Articles T-534: Vulnerability in the PDF distiller of the BlackBerry Attachment Service...

  20. U-139: IBM Tivoli Directory Server Input Validation Flaw

    Broader source: Energy.gov [DOE]

    The Web Admin Tool does not properly filter HTML code from user-supplied input before displaying the input.

  1. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Asset and Service Mgmt Products - Potential security exposure when using Java(TM)-based applications due to vulnerabilities in Java Software Developer Kits.

  2. T-694: IBM Tivoli Federated Identity Manager Products Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    This Security Alert addresses a serious security issue, CVE-2010-4476 (the Java Runtime Environment hangs when converting "2.2250738585072012e-308" to a binary floating-point number). This vulnerability might cause the Java Runtime Environment to hang, enter an infinite loop, and/or crash, resulting in a denial-of-service exposure. The same hang might occur if the number is written without scientific notation (324 decimal places). In addition to the Application Server being exposed to this attack, any Java program using the Double.parseDouble method is also at risk, including any customer-written or third-party application.

  3. Measurement of the Neutron Radius of 208Pb Through Parity-Violation...

    Office of Scientific and Technical Information (OSTI)

    ; Cusanno, Francesco ; Dalton, Mark ; De Leo, Raffaele ; De Jager, Cornelis ; Deconinck, ... Vincent ; Sutera, Concetta ; Tobias, William ; Troth, Wolfgang ; Urciuoli, Guido ; ...

  4. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... ; Phillips, Jesse ; Reed, Phillip R. Abstract not provided. ... Feng, Chengcheng ; Bixler, Nathan E. Abstract not provided. ... Daniel Peter ; Flores, Gregg J. ; Bartel, Timothy James ...

  5. A Core Hole in the Southwestern Moat of the Long Valley Caldera...

    Open Energy Info (EERE)

    in water level, temperatures, and fluid chemistry. Authors Harold A. Wollenberg, Michael L. Sorey, Christopher D. Farrar, Art F. White, S. Flexser and L.C. Bartel Published...

  6. A Large Hadron Electron Collider at CERN (Journal Article) |...

    Office of Scientific and Technical Information (OSTI)

    M. ; Brookhaven ; Barber, D. ; Daresbury DESY Liverpool U. ; Bartels, J. ; Hamburg, Tech. U. ; Behnke, O. ; DESY ; Behr, J. ; DESY ; Belyaev, A.S. ; Rutherford...

  7. U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service

    Broader source: Energy.gov [DOE]

    A remote user can send a series of specially crafted TCP packets to trigger a kernel panic on the target system.

  8. Design procedure for pollutant loadings and impacts for highway stormwater runoff (IBM version) (for microcomputers). Software

    SciTech Connect (OSTI)

    Not Available

    1990-01-01

    This interactive computer program provides a user-friendly personal-computer procedure for the calculations and guidance needed to estimate pollutant loadings and impacts from highway stormwater runoff, as presented in Publication FHWA-RD-88-006, Pollutant Loadings and Impacts from Highway Stormwater Runoff, Volume I: Design Procedure. The program evaluates the water-quality impact of highway stormwater runoff on a lake or stream at a specific highway site, taking into account the necessary rainfall data and the geographic situation of the site. The evaluation considers whether or not the resulting water-quality conditions cause a problem, as indicated by violations of water quality criteria or objectives.

  9. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    CVE-2013-0591 CVE-2013-0595 IMPACT ASSESSMENT: Medium DISCUSSION: The software does not properly filter HTML code from user-supplied input before displaying the input. ...

  10. T-559: Stack-based buffer overflow in oninit in IBM Informix...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    exploit this vulnerability. The specific flaw exists within the oninit process bound to TCP port 9088 when processing the arguments to the USELASTCOMMITTED option in a SQL query....

  11. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-020: Apple QuickTime Multiple Flaws Let Remote Users Execute Arbitrary Code U-126: Cisco Adaptive Security Appliances Port Forwarder ActiveX Control Buffer Overflow ...

  12. Survivability enhancement study for C/sup 3/I/BM (communications...

    Office of Scientific and Technical Information (OSTI)

    RELIABILITY; ELECTROMAGNETIC PULSES; COMMUNICATIONS; FEASIBILITY STUDIES; FIBER OPTICS; HARDENING; MILITARY EQUIPMENT; POWER SUPPLIES; PROGRESS REPORT; SURVIVAL TIME;...

  13. U.S. Department of Energy and IBM to Collaborate in Advancing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    TECHNOLOGIES OFFICE U.S. Department of Energy's Wind Program Funding in the United States: Workforce Development Projects Report Fiscal Years 2008 - 2014 WIND PROGRAM 1 Introduction Wind and Water Power Technologies Office The Wind and Water Power Technologies Office (WWPTO), within the U.S. Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy (EERE), supports the development, deployment, and commercialization of wind and water power technologies. WWPTO works with a

  14. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    LaserJet Printers Unspecified Flaw Lets Remote Users Update Firmware with Arbitrary Code U-053: Linux kexec Bugs Let Local and Remote Users Obtain Potentially Sensitive Information

  15. Survivability enhancement study for C/sup 3/I/BM (communications...

    Office of Scientific and Technical Information (OSTI)

    RELIABILITY; ELECTROMAGNETIC PULSES; COMMUNICATIONS; FEASIBILITY STUDIES; FIBER OPTICS; HARDENING; MILITARY EQUIPMENT; POWER SUPPLIES; PROGRESS REPORT; SURVIVAL TIME; ...

  16. U.S. Department of Energy and IBM to Collaborate in Advancing...

    Office of Environmental Management (EM)

    (DOE) announced today that its Office of Science, the National Nuclear Security ... (R&D) effort to further enhance the capabilities of the fastest computer in existence. ...

  17. Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired...

    Office of Environmental Management (EM)

    that these will enable will change how we do science." The technology represents a fundamental departure from computer design that has been prevalent for the past 70 years, ...

  18. T-722: IBM WebSphere Commerce Edition Input Validation Holes...

    Broader source: Energy.gov (indexed) [DOE]

    recently submitted by the target user via web form to the site, or take actions on the ... recently submitted by the target user via web form to the site, or take actions on the ...

  19. DOE's Shale Gas and Hydraulic Fracturing Research | Department...

    Energy Savers [EERE]

    Statement of Guido DeHoratiis Acting Deputy Assistant Secretary for Oil and Natural Gas ... performance of developing our Nation's unconventional oil and natural gas (UOG) resources. ...

  20. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... Gilman, Ronald (14) Hansen, Jens-Ole (14) Higinbotham, Douglas (13) Urciuoli, Guido (13) ... this experiment, and rule out conclusively long-standing predictions of dimensional ...

  1. Before the Subcommittees on Energy and Environment - House Committee...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology Testimony of Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy Before the Subcommittees on Energy and Environment - House...

  2. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL. 10 Questions for a Materials Scientist: Brian Larsen Meet Brian Larsen, who is...

  3. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256{sup 3}-voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
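
    The "redistribution of the workload" that makes this feasible amounts to balancing voxels evenly across nodes. Below is a sketch of one common block decomposition, assuming one rank per node; the abstract does not give the details of SIMRI's actual scheme.

```python
# Even block decomposition of the voxel workload across ranks: every voxel is
# owned by exactly one rank, and per-rank loads differ by at most one voxel.

def partition(n_items, n_ranks, rank):
    """Half-open index range [start, stop) of items owned by `rank`."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)   # earlier ranks absorb the remainder
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

n_voxels = 256 ** 3     # the 256^3 image, flattened to one index space
n_ranks = 8192          # e.g. one rank per node of the Blue Gene/L partition

counts = [stop - start
          for start, stop in (partition(n_voxels, n_ranks, r) for r in range(n_ranks))]
```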

  4. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect (OSTI)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
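
    The variance computation the abstract describes follows the standard first-order "sandwich rule", var(R) = S^T C S, with S the sensitivity profile of the response and C the cross-section covariance matrix. A minimal sketch with made-up numbers (not SENSIT data):

```python
# Sandwich-rule uncertainty propagation: var(R) = S^T C S, where S holds the
# sensitivity coefficients of response R to each group cross section and C is
# their covariance matrix.  The numbers are illustrative, not SENSIT data.

def response_variance(S, C):
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

S = [0.8, -0.3, 0.1]          # relative sensitivities, one per energy group
C = [[0.04, 0.01, 0.00],      # relative covariance matrix of the cross sections
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.01]]

var = response_variance(S, C)
std = var ** 0.5              # estimated relative standard deviation of R
```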

  5. Nuclear matrix elements for 0{nu}{beta}{sup -}{beta}{sup -} decays: Comparative analysis of the QRPA, shell model and IBM predictions

    SciTech Connect (OSTI)

    Civitarese, Osvaldo; Suhonen, Jouni

    2013-12-30

    In this work we report on general properties of the nuclear matrix elements involved in the neutrinoless double {beta}{sup -} decays (0{nu}{beta}{sup -}{beta}{sup -} decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, like the Shell Model (ISM) and the interacting boson model (IBA-2).

  6. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... Filter by Author Yan, Xinhu (4) Boeglin, Werner (3) Chen, Jian-Ping (3) Cusanno, Francesco (3) Garibaldi, Franco (3) Markowitz, Pete (3) Urciuoli, Guido (3) Ye, Yunxiu (3) ...

  7. Before the Subcommittees on Energy and Environment- House Committee on Science, Space, and Technology

    Office of Energy Efficiency and Renewable Energy (EERE)

    Subject: Interagency Working Group to Support Safe and Responsible Development of Unconventional Domestic Natural Gas Resources By: Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy

  8. Magnetic soft x-ray microscopy ...

    Office of Scientific and Technical Information (OSTI)

    Im, Lars Bocklage, Guido Meier and Peter Fischer, Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Institut für A ...

  9. 10 Questions for a Materials Scientist: Brian Larsen | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Brian Larsen 10 Questions for a Materials Scientist: Brian Larsen January 24, 2013 - 10:50am Brian Larsen is developing the next generation of fuel cell catalysts thanks to the Energy Efficiency and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL.

  10. Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%

    Broader source: Energy.gov [DOE]

    Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using...

  11. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  12. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  13. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by...

  14. U007_Plateau_Training_Records_System-PIA.pdf

    Energy Savers [EERE]

    U.S. Department of Energy and IBM to Collaborate in Advancing Supercomputing Technology November 15, 2006 - 9:25am Lawrence Livermore and Argonne National Lab Scientists to Work with IBM Designers WASHINGTON, DC -- The U.S. Department of Energy (DOE) announced today that its Office of Science, the National Nuclear Security Administration (NNSA) and IBM will share the cost of a

  15. Motor Current Data Collection System

    Energy Science and Technology Software Center (OSTI)

    1992-12-01

    The Motor Current Data Collection System (MCDCS) uses IBM-compatible PCs to collect, process, and store Motor Current Signature information.

  16. Disk Quota | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please...

  17. Driving Operational Changes through an Energy Monitoring System

    SciTech Connect (OSTI)

    2012-08-01

    Institutional change case study details IBM's corporate efficiency program focused on basic operation improvements in its diverse real estate operations.

  18. DOE Fuel Cell Pre-Solicitation Workshop - Breakout Group 2: MEAs, Components, and Integration

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solicitation Workshop 1 March 2010 BREAKOUT GROUP 2: MEAS, COMPONENTS AND INTEGRATION PARTICIPANTS NAME ORGANIZATION Jeff Allen Michigan Tech Guido Bender National Renewable Energy Laboratory Don Connors Ballard Material Products James Cross NUVERA Rick Daniels Advent Technologies North America Mark Debe 3M Emory DeCastro BASF Fuel Cell Mohammad Enayotullah Trenergi Corporation Jim Fenton University of Central Florida/FSEC Ashok Gidwani CFD Research Corporation Craig Gittleman General Motors

  19. DOE Fuel Cell Pre-Solicitation Workshop Participants

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Participants List Name Organization Jesse Adams U.S. Department of Energy Kev Adjemian NISSAN MOTOR Ltd, Japan Mark A. Aitken Ansaldo Fuel Cells Tim Armstrong Oak Ridge National Laboratory Radoslav Atanasoski 3M Plamen Atanassov ONM Chris Bajorek Intematix Suresh Baskaran Pacific Northwest National Laboratory David Beatty Ida Tech Guido Bender University of Hawaii Tom Benjamin Argonne National Laboratory Brian Borglum Versa Power Rod Borup Los Alamos National Laboratory Gerardine G. Botte Ohio

  20. DOE Fuel Cell Pre-Solicitiation Workshop Participants List

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Fuel Cell Pre-Solicitiation Workshop Sheraton Denver West Hotel, 360 Union Boulevard, Lakewood, CO Name Shabbir Ahmed Chris Ainscough Jeffrey S. Allen Kateryna Artyushkova Radoslav Atanasoski Plamen Atanassov Iouri I. Balachov Guido Bender Tom Benjamin Dan Birmingham Jim Boncella Rod Borup Stephanie Byham Stephen A. Campbell David Cocke Kevin Colbow Don Connors Vince Contini James Cross Rick Cutright Rick Daniels Sally Davies Emory S. De Castro Mark K. Debe Gerald DeCullo Chris Detjen Huyen

  1. This list does not imply DOE endorsement of the individuals or companies identified

    Broader source: Energy.gov (indexed) [DOE]

    Steam System Assessment Tool (SSAT) Qualified Specialists June 2016 Name E-mail Address Phone Number Location A Alas, Victor vmalas@crimson.ua.edu 256-473-3486 AL Allen, Ron rallen@onsitenergy.com 530-304-4454 CA Altfeather, Nate altfeathern@saic.com 608-443-8458 WI Anderson, Bob randerson@barr.com 952-832-2721 MN Aue, Jerry jaue@charter.net 715-343-6118 WI B Baesel, Bryan bbaesel@cec-consultants.com 216-749-2992 OH Banuri, Nishit nbanuri3@gmail.com 304-685-6247 PA Bartels, Jeff

  2. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentations Presentations Sort by: Default | Name | Date (low-high) | Date (high-low) | Source | Category IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by default. Specific optimization command line options must be supplied to the compilers in order for them to produce optimized code. In this talk, several of the more useful optimization options for the IBM Fortran, C,

  3. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by default. Specific optimization command line options must be supplied to the compilers in order for them to produce optimized code. In this talk, several of the more useful optimization options for the IBM Fortran, C, and C++ compilers are described and recommendations will be given on which of them are most useful.
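    To make the talk's point concrete — that the XL compilers emit unoptimized code unless flags are passed explicitly — here is a hedged sketch of typical command lines. The flag names are standard IBM XL options; the specific flag mix and the file names (`prog.f`, `prog.c`) are illustrative, not taken from the talk itself.

```shell
# Illustrative IBM XL compile lines. Without explicit options, xlf/xlc/xlC
# produce unoptimized code, so an optimization level is supplied here:
#   -O3            aggressive optimization
#   -qhot          higher-order (loop) transformations
#   -qarch/-qtune  generate for and tune to the host processor
XLFLAGS="-O3 -qhot -qarch=auto -qtune=auto"
echo "xlf $XLFLAGS -o prog prog.f"   # Fortran
echo "xlc $XLFLAGS -o prog prog.c"   # C
```

    The same flags apply to the C++ driver (`xlC`); on a system without the XL compilers these lines only print the commands, which is why they are wrapped in `echo`.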

  4. U.S. Department of Energy Interim E-QIP Procedures | Department...

    Broader source: Energy.gov (indexed) [DOE]

    Energy Security Symposium OE Releases Second Issue of Energy Emergency Preparedness Quarterly (April 2012) V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets...

  5. Armonk, New York: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    place in Westchester County, New York.1 Registered Energy Companies in Armonk, New York International Business Machines Corp IBM Windfarm Finance LLC References US Census...

  6. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  7. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  8. 1950s | OSTI, US Dept of Energy Office of Scientific and Technical...

    Office of Scientific and Technical Information (OSTI)

    1950: Display 1950: Documents 1950: Group Photo 1950: IBM Punch Cards 1950: Maintenance of Kodak Film Processor 1950: Atoms for Peace Program Material 1950: Troops Train ...

  9. DOE Shares Funding Opportunities and Honors Small Business Award...

    Energy Savers [EERE]

    ... Management & Operations Small Business Special Recognition: Princeton Plasma Physics Laboratory (Princeton, NJ) DOE Mentor of the Year: IBM (Washington, DC) DOE Protégé of the ...

  10. Advance Patent Waiver W(A)2005-014

    Broader source: Energy.gov [DOE]

    This is a request by IBM for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  11. Driving Operational Changes Through an Energy Monitoring System

    Broader source: Energy.gov [DOE]

    Fact sheet describes a case study of IBM's corporate energy efficiency monitoring program that focuses on basic improvements in its real estate operations.

  12. Progress

    Office of Scientific and Technical Information (OSTI)

    performance on modern supercomputer architectures like the IBM POWER5. MOTIVATION Modern large-scale parallel simulations produce data volumes that are on the order of TBs. ...

  13. FACT SHEET: Obama Administration Announces Federal and Private...

    Broader source: Energy.gov (indexed) [DOE]

    ... Power State of California TECO Energy Tesla Westar Energy EXECUTIVE ACTIONS TO ... of Washington IBM (advisory board) Tesla Motors, Inc. (advisory board) Increasing ...

  14. Queueing & Running Jobs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Running on BGQ Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please...

  15. EERE Success Story-Solar Forecasting Gets a Boost from Watson, Accuracy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Improved by 30% | Department of Energy EERE Success Story-Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30% October 27, 2015 - 11:48am IBM YouTube Video | Courtesy of IBM Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using Watson-like technology to improve solar

  16. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  17. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  18. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  19. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  20. Secretary Chu Announces 150 Students to Receive Graduate Fellowships...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Alexis Herman Former Secretary of Labor Chad Holliday, Jr. Former CEO of Dupont Michael McQuade Senior VP, United Technologies Corporation William Perry Former ...

  1. Microsoft PowerPoint - NERSC-NUG-yukiko-08

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Research and Evaluation Prototypes will continue to support the DARPA HPCS partnership's development of the Cray architecture and, in partnership with NNSA and IBM, support the ...

  2. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC...

    Office of Scientific and Technical Information (OSTI)

    Intel Institute for Defense Analyses University of California, San Diego IBM DARPA NVIDIA University of Tennessee Oak Ridge National Laboratory Lawrence Livermore ...

  3. SunShot Rooftop Challenge Awardees | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    enable multiple financing options for community solar programs. City University of New York City University of New York, NYC Department of Buildings, Procemx, CUNY Ventures, IBM,...

  4. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    California, Santa Barbara Catherine Murphy, University of Illinois at Urbana-Champaign Frances Ross, IBM Ned Seeman, New York University Donald Tennant, Cornell Nanoscale Science...

  5. Area schools get new computers through Los Alamos National Laboratory...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Area schools get new computers through Los Alamos National Laboratory, IBM partnership Northern New Mexico schools are recipients of fully loaded...

  6. V-215: NetworkMiner Directory Traversal and Insecure Library...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Related Articles U-198: IBM Lotus Expeditor Multiple Vulnerabilities U-146: Adobe Reader/Acrobat Multiple Vulnerabilities T-542: SAP Crystal Reports Server Multiple...

  7. Advance Patent Waiver W(A)2005-048

    Broader source: Energy.gov [DOE]

    This is a request by IBM BLUEGENE/P DESIGN, PHASE III for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  8. The Cell Processor

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Cell Processor Overview Programming the Cell Processor Concluding Remarks © 2005 IBM Corporation Cell Highlights Observed clock speed > 4 GHz Peak performance (single ...

  9. Blue Gene/Q Network Performance Counters Monitoring Library

    Energy Science and Technology Software Center (OSTI)

    2015-03-12

    BGQNCL is a library to monitor and record network performance counters on the 5D torus interconnection network of IBM's Blue Gene/Q platform.

  10. Pete Beckman on Mira and Exascale

    ScienceCinema (OSTI)

    Pete Beckman

    2013-06-06

    Argonne's Pete Beckman, director of the Exascale Technology and Computing Institute (ETCi), talks about the IBM Blue Gene/Q supercomputer and the future of computing and exascale technology.

  11. Multiscale Simulations of Human Pathologies | Argonne Leadership...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Karniadakis, Paris Perdikaris, and Yue Yu, Brown University; Leopold Grinberg, IBM T. J. Watson Research Center and Brown University Multiscale Simulations of Human Pathologies PI ...

  12. Geometry of coexistence in the interacting boson model

    SciTech Connect (OSTI)

    Van Isacker, P.; Frank, A.; Vargas, C.E.

    2004-09-13

    The Interacting Boson Model (IBM) with configuration mixing is applied to describe the phenomenon of coexistence in nuclei. The analysis suggests that the IBM with configuration mixing, used in conjunction with a (matrix) coherent-state method, may be a reliable tool for the study of geometric aspects of shape coexistence in nuclei.

  13. Code System for Analysis of Piping Reliability Including Seismic Events.

    Energy Science and Technology Software Center (OSTI)

    1999-04-26

    Version 00 PC-PRAISE is a probabilistic fracture mechanics computer code developed for IBM or IBM-compatible personal computers to estimate probabilities of leaks and breaks in nuclear power plant cooling piping. It was adapted from LLNL's PRAISE computer code.

  14. Multi Platform Graphics Subroutine Library

    Energy Science and Technology Software Center (OSTI)

    1992-02-21

    DIGLIB is a collection of general graphics subroutines. It was designed to be small, reasonably fast, device-independent, and compatible with DEC-supplied operating systems for VAXes, PDP-11s, and LSI-11s, and the DOS operating system for IBM PCs and IBM-compatible machines. The software is readily usable by casual programmers for two-dimensional plotting.

  15. Grid Week 2008 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Grid Week 2008 September 24, 2008 - 3:43pm Remarks as Prepared for Secretary Bodman Thank you, Guido, for that kind introduction . . . and thank you Kevin for your leadership. You and your team, along with Grid Week's organizing committee and many partners, have done a terrific job putting together this event. I thank you all for being here today. Last year, those of us participating in the first Grid Week joined together to make the case for a concerted national effort to

  16. Helicity evolution at small-x

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    2016-01-13

    We construct small-x evolution equations which can be used to calculate quark and anti-quark helicity TMDs and PDFs, along with the g1 structure function. These evolution equations resum powers of αs ln²(1/x) in the polarization-dependent evolution along with the powers of αs ln(1/x) in the unpolarized evolution which includes saturation effects. The equations are written in an operator form in terms of polarization-dependent Wilson line-like operators. While the equations do not close in general, they become closed and self-contained systems of non-linear equations in the large-Nc and large-Nc & Nf limits. As a cross-check, in the ladder approximation, our equations map onto the same ladder limit of the infrared evolution equations for the g1 structure function derived previously by Bartels, Ermolaev and Ryskin.

  17. ASC_machines_cielo_2

    National Nuclear Security Administration (NNSA)

    [Timeline chart, 1996-2012, performance in teraFLOPS to petaFLOPS: ASC #1 Top500 winners, ASC supercomputers, ASC future supercomputers] Leading High-Performance Computing ASCI Red * First sustained teraFLOPS machine * 1 teraFLOPS * #1 TOP500 6/97-11/00 * Intel ASCI White * First routinely shared tri-lab resource * 12 teraFLOPS * #1 TOP500 11/00-6/02 * IBM ASCI Blue Mountain * 3 teraFLOPS * SGI ASC Purple * 100 teraFLOPS * IBM * 3 teraFLOPS * IBM ASC Red Storm * 40 teraFLOPS * Cray ASCI Q * 20 teraFLOPS * Compaq

  18. PII: S0368-2048(98)00286-2

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Liquid crystal alignment by rubbed polymer surfaces: a microscopic bond orientation model J. Stöhr*, M.G. Samant IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120 USA Dedication by J. Stöhr - This paper is dedicated to Dick Brundle who for many years was my colleague at the IBM Almaden Research Center. Dick was responsible for my hiring by IBM, and over the years we interacted with each other in many roles - as each other's boss or simply as colleagues.

  19. Timeline of Events: 2001 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Timeline of Events: 2001 August 15, 2001: IBM's ASCI White Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White. Read more June 28, 2001: President Bush announces $85.7 million in Federal grants President Bush speaks to employees at DOE's Forrestal building in Washington, D.C. announcing $85.7 million in Federal grants. Read

  20. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    3, 2014 Time: 11:00 am Speaker: Bryan Jackson, IBM Title: IBM's Brain-Inspired Computing Systems and Ecosystem Location: 67-3111 Chemla Room Abstract: Over the past 6 years as part of the DARPA SyNAPSE program, IBM's Brain Inspired Computing group has created an end-to-end ecosystem that encompasses the entire development stack for neural-inspired applications. Algorithms and applications are first developed in our new programming language and environment; they are then simulated using our

  1. QPX Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QPX Architecture Quad Processing eXtension to the Power ISA™ May 9, 2012 Thomas Fox foxy@us.ibm.com Chapter 1. Quad-Vector Floating-Point Facility Overview This document defines the Quad-Processing eXtension (QPX) to IBM's Power Instruction Set Architecture. Refer to IBM's Power ISA™ architecture document for descriptions of the base Power instruction set, the storage model, and related facilities available to the application programmer. The computational model of the

  2. US ITER | Media Corner

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    News Service, January 4, 2007 Head of US ITER project named IEEE Fellow ORNL News ... Power IBM press release, July 2, 2007 ITER Selects ANSYS Solutions for Design of ...

  3. Solar Forecasting Gets a Boost from Watson, Accuracy Improved...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30% October 27, 2015 - 11:48am IBM ...

  4. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect (OSTI)

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  5. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    from the latest Clean Energy Jobs Roundup. August 7, 2012 Principal Deputy Director Eric Toone, former ARPA-E Director Arun Majumdar, the Honorable Bart Gordon and IBM Research...

  6. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    was reported in IBM Tivoli Federated Identity Manager. January 18, 2013 V-072: Red Hat update for java-1.7.0-openjdk Red Hat has issued an update for java-1.7.0-openjdk....

  7. 2000 User Survey Results

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    "NERSC has been the most stable supercomputer center in the country particularly with the migration from the T3E to the IBM SP". "Makes supercomputing easy." Below are the survey...

  8. Item Management Control System

    Energy Science and Technology Software Center (OSTI)

    1993-08-06

    The Item Management Control System (IMCS) has been developed at Idaho National Engineering Laboratory to assist in organizing collections of documents using an IBM-PC or similar DOS system platform.

  9. Microsoft PowerPoint - Salishan 2005Adolfyweb

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Work funded by ASC, Office of Science, DARPA CCS-3 PAL CESC, April 2005, Washington DC ... Examine possible future systems - e.g. IBM PERCS (DARPA HPCS), BlueGene/P, ... ? Recent ...

  10. U-048: HP LaserJet Printers Unspecified Flaw Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    T-699: EMC AutoStart Buffer Overflows Let Remote Users Execute Arbitrary Code U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System...

  11. Solar Forecast Improvement Project

    Office of Energy Efficiency and Renewable Energy (EERE)

    For the Solar Forecast Improvement Project (SFIP), the Earth System Research Laboratory (ESRL) is partnering with the National Center for Atmospheric Research (NCAR) and IBM to develop more...

  12. Configuration

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... GB Login Nodes The four login nodes (IBM System x3650 M2), each having two quad-core Intel Xeon X5550 2.67 GHz processors, for a total of eight cores per node and 32 cores total. ...

  13. Bassi_intro_NUG06.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    speed * 6.7 TFlops theoretical peak system performance * 100 TB of usable disk space in GPFS (General Parallel Filesystem from IBM) * 2 login nodes * 6 VSD (GPFS) servers * The ...

  14. STRIPESPDSlidesApril_2010.pdf | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    STRIPESPDSlidesApril2010.pdf More Documents & Publications The document title is Arial, 32-point bold. IBM...

  15. 2011 NERSC User Survey (Read Only)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sun Solaris IBM AIX HP HPUX SGI IRIX Other PC Systems Windows 7 Windows Vista Windows XP Windows 2000 Other Windows Mac Systems MacOS X MacOS 9 or earlier Other Mac Other...

  16. Mira/Cetus/Vesta | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Mira/Cetus/Vesta Mira Mira, an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, is equipped with

  17. Landscape NERSC template

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER A Comparison of Performance Analysis Tools on the NERSC SP Jonathan Carter NERSC User Services Performance Tools on the IBM SP * PE Benchmarker - IBM PSSP - Trace and visualize hardware counter values or MPI and user-defined events * Paraver - European Center for Parallelism at Barcelona (CEPBA) - Trace and visualize program states, hardware counter values, and

  18. Lawrence Livermore and Los Alamos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired Supercomputer: Chip-architecture breakthrough accelerates path to exascale computing; helps computers tackle complex, cognitive tasks such as pattern recognition and sensory processing | Department of Energy

  19. INDIGO PINE | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    Mira, the 10-petaflop IBM Blue Gene/Q system at Argonne National Laboratory, is capable of carrying out 10 quadrillion calculations per second. Each year researchers apply to the INCITE program to get to use this machine's incredible computing power. | Photo courtesy of Argonne National Lab.

  20. Microsoft PowerPoint - 2009 04 Salishan prog models CLEAN-Stunkel [Compatibility Mode]

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Salishan conference, April 2009 Impacts of Energy Efficiency on Supercomputer Programming Models Craig Stunkel, IBM Research What is a programming model? A programming model is a story - a common conceptual framework - used by application developers, algorithm designers, ... It may be realized through one or more of: * Libraries * Language/compiler extensions - pragmas

  1. through Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Area schools get new computers through Los Alamos National Laboratory, IBM partnership May 8, 2009 LOS ALAMOS, New Mexico, May 8, 2009-Thanks to a partnership between Los Alamos National Laboratory and IBM, Northern New Mexico schools are recipients of fully loaded desktop and laptop computers. Officials from the Laboratory's Community Programs Office, the Española School Board, and elected officials including Española Mayor Joseph Maestas recently dedicated the technology center at Española

  2. Case Study: Driving Operational Changes Through an Energy Monitoring System

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    reporting, checklists, energy targets, and feedback leads to effective organizational change. Driving Operational Changes Through an Energy Monitoring System In 2006, IBM launched a corporate efficiency program focused on basic operation improvements in its diverse and far-flung real estate operations. The efficiency program had behavior change as a major focus. Examples of changes include the following: * IBM implemented a monthly energy reporting system for its various facilities where

  3. INCITE Program Doles Out Hours on Supercomputers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    INCITE Program Doles Out Hours on Supercomputers November 5, 2012 - 1:30pm Mira, the 10-petaflop IBM Blue Gene/Q system at Argonne National Laboratory, is capable of carrying out 10 quadrillion calculations per second. Each year researchers apply to the INCITE program to get to use this machine's incredible computing power. | Photo courtesy of Argonne National Lab.

  4. Using gpfs 2.2 to enable a cross platform accessibility of singlestorage

    SciTech Connect (OSTI)

    Baird, Will

    1994-12-01

    With IBM's aid I have conducted a cross-compatibility test of GPFS 2.2 between an IBM F50 Power2 running AIX 5.2 ML/3 and 8 dual Pentium 4 2.2 GHz systems running Red Hat 9.0. The objective was to demonstrate a single shared instance of the file system and storage between the disparate operating systems and hardware systems. The cross-compatibility test was successful. The chronology of events that led to this successful test is documented below.

  5. Interacting boson model from energy density functionals: {gamma}-softness and the related topics

    SciTech Connect (OSTI)

    Nomura, K.

    2012-10-20

    A comprehensive way of deriving the Hamiltonian of the interacting boson model (IBM) is described. Based on the fact that the multi-nucleon induced surface deformation in finite nucleus is simulated by effective boson degrees of freedom, the potential energy surface calculated with self-consistent mean-field method employing a given energy density functional (EDF) is mapped onto the IBM analog, and thereby the excitation spectra and transition rates with good symmetry quantum numbers are calculated. Recent applications of the proposed approach are reported: (i) an alternative robust interpretation of the {gamma}-soft nuclei and (ii) shape coexistence in lead isotopes.
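    For context, the mapping described above typically fixes the parameters of a standard IBM Hamiltonian by matching its coherent-state energy surface to the mean-field (EDF) surface. The form below is the conventional proton-neutron (IBM-2) Hamiltonian commonly used in this literature; the parameter symbols are conventional notation, not quoted from the abstract:

```latex
% Standard IBM-2 Hamiltonian whose parameters (\epsilon, \kappa, \chi_\rho)
% are determined by mapping the bosonic energy surface onto the EDF surface:
\hat{H} = \epsilon\,\bigl(\hat{n}_{d_\pi} + \hat{n}_{d_\nu}\bigr)
        + \kappa\, \hat{Q}_{\pi}^{\chi_\pi} \cdot \hat{Q}_{\nu}^{\chi_\nu},
\qquad
\hat{Q}_{\rho}^{\chi_\rho}
  = d_\rho^{\dagger} s_\rho + s_\rho^{\dagger} \tilde{d}_\rho
  + \chi_\rho\, \bigl[d_\rho^{\dagger} \tilde{d}_\rho\bigr]^{(2)},
\quad \rho = \pi, \nu
```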

  6. Buildings Energy Data Book: 5.7 Appliances

    Buildings Energy Data Book [EERE]

    3 2007 Personal Computer Manufacturer Market Shares (Percent of Products Produced) Desktop Computer Portable Computer Company Market Share (%) Market Share (%) Dell 32% 25% Hewlett-Packard 24% 26% Gateway 5% 4% Apple 4% 9% Acer America 3% N/A IBM 1% N/A Micron 0% N/A Toshiba N/A 12% Lenovo (IBM) N/A 6% Sony N/A 5% Fujitsu Siemens N/A 1% Others 30% 13% Total 100% 100% Note(s): Source(s): Total Desktop Computer Units Shipped: 34,211,601 Total Portable Computer Units Shipped: 30,023,844

  7. BlueGene/Q Optimization Bob Walkup

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BlueGene/Q Optimization Bob Walkup walkup@us.ibm.com 914-945-1512 Performance : what to expect Using IBM XL compilers A case study from SC13 Alternative compilers for BGQ Libraries that can help performance OpenMP and MPI Some Useful Properties of BG/Q You normally get dedicated resources for computation and communication (not counting the file-system). Memory access is uniform ... no worrying about NUMA. Processes and threads are bound by CNK (compute node kernel). No context switches => very

  8. Explicit simulation of a midlatitude Mesoscale Convective System

    SciTech Connect (OSTI)

    Alexander, G.D.; Cotton, W.R.

    1996-04-01

    We have explicitly simulated the mesoscale convective system (MCS) observed on 23-24 June 1985 during PRE-STORM, the Preliminary Regional Experiment for the Stormscale Operational and Research Meteorology Program. Stensrud and Maddox (1988), Johnson and Bartels (1992), and Bernstein and Johnson (1994) are among the researchers who have investigated various aspects of this MCS event. We have performed this MCS simulation (and a similar one of a tropical MCS; Alexander and Cotton 1994) in the spirit of the Global Energy and Water Cycle Experiment Cloud Systems Study (GCSS), in which cloud-resolving models are used to assist in the formulation and testing of cloud parameterization schemes for larger-scale models. In this paper, we describe (1) the nature of our 23-24 June MCS simulation and (2) our efforts to date in using our explicit MCS simulations to assist in the development of a GCM parameterization for mesoscale flow branches. The paper is organized as follows. First, we discuss the synoptic situation surrounding the 23-24 June PRE-STORM MCS, followed by a discussion of the model setup and results of our simulation. We then discuss the use of our MCS simulations in developing a GCM parameterization for mesoscale flow branches and summarize our results.

  9. History of Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    208 16 3,328 3,328 1.0 GB IBM Colony 3,052 (2) 4,992 JWatson 1999 Cray Y-MP J90 Cray CMOS 100 MHz 1 32 32 8 250 MB SMP 6.4 FCrick 1999 Cray Y-MP J90 Cray CMOS 100 MHz 1 32 32 8...

  10. The quest data mining system

    SciTech Connect (OSTI)

    Agrawal, R.; Mehta, M.; Shafer, J.; Srikant, R.

    1996-12-31

    The goal of the Quest project at the IBM Almaden Research center is to develop technology to enable a new breed of data-intensive decision-support applications. This paper is a capsule summary of the current functionality and architecture of the Quest data mining System.

  11. Crystal Structure of the BIR1 Domain of XIAP in Two Crystal Forms

    SciTech Connect (OSTI)

    Lin,S.; Huang, Y.; Lo, Y.; Lu, M.; Wu, H.

    2007-01-01

    X-linked inhibitor of apoptosis (XIAP) is a potent negative regulator of apoptosis. It also plays a role in BMP signaling, TGF-{beta} signaling, and copper homeostasis. Previous structural studies have shown that the baculoviral IAP repeat (BIR2 and BIR3) domains of XIAP interact with the IAP-binding-motifs (IBM) in several apoptosis proteins such as Smac and caspase-9 via the conserved IBM-binding groove. Here, we report the crystal structure in two crystal forms of the BIR1 domain of XIAP, which does not possess this IBM-binding groove and cannot interact with Smac or caspase-9. Instead, the BIR1 domain forms a conserved dimer through the region corresponding to the IBM-binding groove. Structural and sequence analyses suggest that this dimerization of BIR1 in XIAP may be conserved in other IAP family members such as cIAP1 and cIAP2 and may be important for the action of XIAP in TGF-{beta} and BMP signaling and the action of cIAP1 and cIAP2 in TNF receptor signaling.

  12. MS FORTRAN Extended Libraries

    Energy Science and Technology Software Center (OSTI)

    1986-09-01

    DISPPAK is a set of routines for use with Microsoft FORTRAN programs that allows the flexible display of information on the screen of an IBM PC in both text and graphics modes. The text mode routines allow the cursor to be placed at an arbitrary point on the screen and text to be displayed at the cursor location, making it possible to create menus and other structured displays. A routine to set the color of the characters that these routines display is also provided. A set of line drawing routines is included for use with IBM's Color Graphics Adapter or an equivalent board (such as the Enhanced Graphics Adapter in CGA emulation mode). These routines support both pixel coordinates and a user-specified set of real number coordinates. SUBPAK is a function library which allows Microsoft FORTRAN programs to calculate random numbers, issue calls to the operating system, read individual characters from the keyboard, perform Boolean and shift operations, and communicate with the I/O ports of the IBM PC. In addition, peek and poke routines, a routine that returns the address of any variable, and routines that can access the system time and date are included.

  13. EIA directory of electronic products. First quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-04-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  14. EIA directory of electronic products. Fourth quarter 1995

    SciTech Connect (OSTI)

    1996-08-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  15. EIA directory of electronic products, Third quarter 1995

    SciTech Connect (OSTI)

    1996-02-01

    EIA makes available for public use a series of machine-readable data files and computer models on magnetic tapes. Selected data files/models are also available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published. Ordering information is given in the preface. Indexes are included.

  16. New Advances in Neutrinoless Double Beta Decay Matrix Elements

    SciTech Connect (OSTI)

    Munoz, Jose Barea [Instituto de Estructura de la Materia, C.S.I.C. Unidad Asociada al Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Apartado 1065, 41080 Sevilla (Spain)

    2010-08-04

    We present the matrix elements necessary to evaluate the half-life of some neutrinoless double beta decay candidates in the framework of the microscopic interacting boson model (IBM). We compare our results with those from other models and extract some simple features of the calculations.
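    For context (this schematic is not taken from the record itself): in the standard treatment of neutrinoless double beta decay via light-neutrino exchange, the half-life that such matrix elements feed into factorizes into a phase-space factor, the squared nuclear matrix element (computed here in the IBM), and the effective neutrino mass:

    ```latex
    % Standard schematic factorization (light-neutrino exchange):
    \left[ T_{1/2}^{0\nu} \right]^{-1}
      = G_{0\nu}\,\bigl| M_{0\nu} \bigr|^{2}
        \left( \frac{\langle m_{\beta\beta} \rangle}{m_e} \right)^{2}
    ```

    Model dependence enters almost entirely through M_{0\nu}, which is why comparisons across IBM, shell model, and QRPA calculations matter.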

  17. Neutrinoless double beta decay in the microscopic interacting boson model

    SciTech Connect (OSTI)

    Iachello, F. [Center for Theoretical Physics, Sloane Physics Laboratory Yale University New Haven, CT 06520-8120 (United States)

    2009-11-09

    The results of a calculation of the nuclear matrix elements for neutrinoless double beta decay in the closure approximation in several nuclei, within the framework of the microscopic interacting boson model (IBM-2), are presented and compared with those calculated in the shell model (SM) and the quasiparticle random phase approximation (QRPA).

  18. Project Final Report: HPC-Colony II

    SciTech Connect (OSTI)

    Jones, Terry R; Kale, Laxmikant V; Moreira, Jose

    2013-11-01

    This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included.

  19. NUG2013UserSurvey.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    --- NUG 2013 --- 10 --- Advanced Architectures & Programming Models. Architecture: GPUs / Multi-Threaded / MIC / IBM Cell. Big MPP: 22.4% / 9.2% / 5.1% / 3.1%. Medium MPP: 20.3% / 14.2% / 3.6% / 2.5%...

  20. Intellectual Property (IP) Service Providers for Acquisition and Assistance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Transactions | Department of Energy DOE_IP_Counsel_for_DOE_Laboratories 2015 (8.17 KB) More Documents & Publications Intellectual Property (IP) Service Providers for Acquisition and Assistance Transactions WA_05_056_IBM_WATSON_RESEARCH_CENTER_Waiver_of_Domestic_and_.pdf Need to Consider Intentional Destructive Acts in NEPA Documents (CEQ, 2006)

  1. July 28, 2010, Partnerships of academia, industry, and government labs

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    UNCLASSIFIED UNCLASSIFIED * Interdisciplinary nature of research * Rapid transition from research to products One size does not fit all Partnerships of academia, industry, and government labs UNCLASSIFIED UNCLASSIFIED Network Science Collaborative Technology Alliance: an Interdisciplinary Collaboration Model Social/Cognitive Network ARC * Principal Member - Rensselaer Polytechnic Institute * General Members - CUNY, Northeastern Univ, IBM Communication Networks ARC * Principal Member - Penn State

  2. Opportunities for high aspect ratio micro-electro-magnetic-mechanical systems (HAR-MEMMS) at Lawrence Berkeley Laboratory

    SciTech Connect (OSTI)

    Hunter, S.

    1993-10-01

    This report contains viewgraphs on the following topics: Opportunities for HAR-MEMMS at LBL; Industrial Needs and Opportunities; Deep Etch X-ray Lithography; MEMS Activities at BSAC; DNA Amplification with Microfabricated Reaction Chamber; Electrochemistry Research at LBL; MEMS Activities at LLNL; Space Microsensors and Microinstruments; The Advanced Light Source; Institute for Micromaching; IBM MEMS Interests; and Technology Transfer Opportunities at LBL.

  3. Electromagnetic Reciprocity.

    SciTech Connect (OSTI)

    Aldridge, David F.

    2014-11-01

    A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously-developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. ACKNOWLEDGEMENTS Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a

  4. 3081/E processor

    SciTech Connect (OSTI)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Rankin, P.; Trang, Q.; Ferran, P.M.; Fucci, A.; Hinton, R.; Jacobs, D.; Martin, B.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future.

  5. ICP-MS Data Analysis Software

    Energy Science and Technology Software Center (OSTI)

    1999-01-14

    VG2Xl - this program reads binary data files generated by VG Instruments inductively coupled plasma-mass spectrometers using PlasmaQuad Software Version 4.2.1 and 4.2.2 running under IBM OS/2. ICPCalc - this module is a macro for Microsoft Excel written in VBA (Visual Basic for Applications) that performs data analysis for ICP-MS data required for nuclear materials that cannot readily be done with the vendor's software. VG2GRAMS - This program reads binary data files generated by VG Instruments inductively coupled plasma mass spectrometers using PlasmaQuad software versions 4.2.1 and 4.2.2 running under IBM OS/2.

  6. A valiant little terminal: A VLT user's manual

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

    VLT came to be used at SLAC (Stanford Linear Accelerator Center), because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and TeX workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public domain VT100 emulator written by Dave Wecker et al. and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  7. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    SciTech Connect (OSTI)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  8. heat_ghc02

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Hanford Story Tank Waste Cleanup

    [Figure residue: plots of calculation time, calculation slowdown, and communication time versus expansion level on the IBM SP, for L=1 and L=2 runs on 16 and 64 PEs.]

  9. A study of electromagnetic characteristics of {sup 124,126,128,130,132,134,136}Ba isotopes performed in the framework of IBA

    SciTech Connect (OSTI)

    Turkan, N.

    2010-01-15

    It was pointed out that the level schemes of the transitional nuclei {sup 124,126,128,130,132,134,136}Ba can also be studied with both versions of the interacting boson model (IBM-1 and IBM-2), and an adequate description of E2 transitions within the model is thereby confirmed. Most of the {delta}(E2/M1) ratios that were previously unknown are given, and the set of parameters used in these calculations is the best approximation carried out so far. It turns out that the interacting boson approximation is fairly reliable for the calculation of spectra across the entire set of {sup 124,126,128,130,132,134,136}Ba isotopes.

  10. Salishan_Talk 4-14

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Yes Virginia, There is an HPSS in Your Future Dick Watson Lawrence Livermore National Laboratory 925-422-9216 dwatson@llnl.gov Development Partners - Lawrence Livermore National Laboratory - Oak Ridge National Laboratory - Los Alamos National Laboratory - Sandia National Laboratories - National Energy Research Scientific Computing Center - IBM HPSS Web Site URL: www.hpss-collaboration.org Prepared for Salishan Conference on High Speed Computing, Salishan Oregon, 4/24-27/2006 UCRL-PRES-220462 2

  11. Scalasca | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Transfer Debugging & Profiling Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource.

  12. Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology | Department of Energy Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting Technology Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting Technology IBM logo.png As part of this project, new solar forecasting technology will be developed that leverages big data processing, deep machine learning, and cloud modeling integrated in a universal platform with an open architecture. Similar to the Watson computer system, this proposed technology

  13. GAMESS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies GAMESS What Is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. For more information on GAMESS, see the Gordon research

  14. Recent developments in the theory of double beta decay

    SciTech Connect (OSTI)

    Iachello, F.; Kotila, J.; Barea, J.

    2013-12-30

    We report results of a novel calculation of phase space factors for 2{nu}{beta}{sup +}{beta}{sup +}, 2{nu}{beta}{sup +}EC, 2{nu}ECEC, 0{nu}{beta}{sup +}{beta}{sup +}, and 0{nu}{beta}{sup +}EC using exact Dirac wave functions, and finite nuclear size and electron screening corrections. We present results of expected half-lives for 0{nu}{beta}{sup +}{beta}{sup +} and 0{nu}{beta}{sup +}EC decays obtained by combining the calculation of phase space factors with IBM-2 nuclear matrix elements.

  15. Mira | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Featured Videos Mira: Argonne's 10-Petaflop Supercomputer Mira's Dedication Ceremony Introducing Mira: Our Next-Generation Supercomputer Mira Mira Ushers in a New Era of Scientific Supercomputing As one of the fastest supercomputers, Mira, our 10-petaflops IBM Blue Gene/Q system, is capable of 10 quadrillion calculations per second. With this computing power, Mira can do in one day what it would take

  16. Oak Ridge National Laboratory - Computing and Computational Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Directorate Oak Ridge to acquire next generation supercomputer Oak Ridge to acquire next generation supercomputer The U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to Oak Ridge National Laboratory (ORNL). The OLCF's new hybrid CPU/GPU computing system, Summit, will be delivered in 2017. (more) Links Department of Energy Consortium for Advanced Simulation of Light Water Reactors Extreme

  17. 1950s | OSTI, US Dept of Energy Office of Scientific and Technical

    Office of Scientific and Technical Information (OSTI)

    Information 50s To view OSTI Historical Photo Gallery, you can browse the collections below. 1940s | 1960s | 1970s | 1980s | 1990s | 2000s 1950: Remodeling Building 1950: Display 1950: Documents 1950: Group Photo 1950: IBM Punch Cards 1950: Maintenance of Kodak Film Processor 1950: Atoms for Peace Program Material 1950: Troops Train 1950: Manager 1951-1955 Armen Gregory Abdian 1950: United Nations 1950: Filing Cabinets 1950: Composition Section 1950: Geneva Conference 1950: International

  18. carverintro.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Carver David Turner NERSC User Services Group NUG Meeting, October 18, 2010 2 Tutorial Overview * Background * Hardware * Software * Programming * Running Jobs 3 Background * Replace Bassi and Jacquard * Hardware procurement - "Scalable Units" * Two funding sources - Carver * NERSC program funds - Magellan * ARRA funds 4 System Overview * IBM iDataPlex System - 14 compute racks * 80 nodes/rack (1120 total compute nodes) * "Water cooled" - 5 service racks * Login, I/O, and

  19. MADNESS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies MADNESS Overview MADNESS is a numerical tool kit used to solve integral differential equations using multi-resolution analysis and a low-rank separation representation. MADNESS can solve multi-dimensional equations, currently up

  20. Magellan Explores Cloud Computing for DOE's Scientific Mission

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Explores Cloud Computing for DOE's Scientific Mission Magellan Explores Cloud Computing for DOE's Scientific Mission March 30, 2011 Cloud Control -This is a picture of the Magellan management and network control racks at NERSC. To test cloud computing for scientific capability, NERSC and the Argonne Leadership Computing Facility (ALCF) installed purpose-built testbeds for running scientific applications on the IBM iDataPlex cluster. (Photo Credit: Roy Kaltschmidt) Cloud computing is gaining

  1. About

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Archive » About About HPSS Mass Storage HPSS tape library The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is Hierarchical Storage Management (HSM) software developed by a collaboration of DOE labs, of which NERSC is a participant, and IBM. The HSM software enables all user data to be ingested onto high performance disk arrays and automatically migrated to a very large

  2. Organization-About-PHaSe-EFRC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    About Mission Statement (PDF) Organization Contact Us organization This webpage is provided for legacy archive purposes only, as of 30 April 2015. The day-to-day operations of the University of Massachusetts Amherst PHaSE EFRC are administered by co-directors. Russell is the Samuel Conte Distinguished Professor of Polymer Science and Engineering, with years of previous experience at IBM Research, over 620 publications, and 21 patents in polymer chemistry and physics. Lahti has over 29 years at

  3. Performance Application Programming Interface

    Energy Science and Technology Software Center (OSTI)

    2005-10-31

    PAPI is a programming interface designed to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. This release covers the hardware dependent implementation of PAPI version 3 for the IBM BlueGene/L (BG/L) system.

  4. Determining Memory Use | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Determining Memory Use Determining the amount of memory available during the execution of the program requires the use of

  5. Timeline

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Timeline Timeline Date Event May 1, 2010 Account charging starts Mar 22, 2010 All active NERSC user accounts enabled Mar 17, 2010 Magellan queues added Mar 12, 2010 System accepted Feb 22, 2010 Selected NERSC user accounts enabled Jan 29, 2010 Acceptance Test Begins Jan 04, 2010 System integration begins at NERSC Oakland Scientific Facility Oct 06, 2009 Contract awarded to IBM by DOE Last edited: 2016-04-29 11:34:54

  6. Watson Workshop

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Home » Events » HPC Workshops » Watson Workshop Watson Workshop January 21, 2016 The Data and Analytics Services group is coordinating an IBM Watson Workshop at LBL and NERSC on Thursday, January 21. Watson is an artificial intelligence system combining advanced natural language processing, machine learning, and information retrieval technologies. The workshop provides attendees with an overview of Watson and an opportunity to explore potential partnerships with the Watson team. The overview

  7. Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne National Laboratory Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography October 1, 2012 Tweet EmailPrint Users of the Center for Nanoscale Materials (CNM) from IBM exploited nanofocused X-ray Bragg projection ptychography to determine the lattice strain profile in an epitaxial SiGe stressor layer of a silicon prototype device. The theoretical and experimental framework of this new coherent diffraction strain imaging approach was developed by Argonne's Materials

  8. EIA directory of electronic products fourth quarter 1993

    SciTech Connect (OSTI)

    Not Available

    1994-02-23

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published.

  9. Benchmarking and tuning the MILC code on clusters and supercomputers

    SciTech Connect (OSTI)

    Steven A. Gottlieb

    2001-12-28

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.

  10. Example Program and Makefile for BG/Q | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Example Program and Makefile for BG/Q

  11. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    SciTech Connect (OSTI)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  12. Nuclear structure studies. [Dept. of Chemistry, Univ. of Maryland]

    SciTech Connect (OSTI)

    Walters, W.B.

    1992-08-31

    New results are reported for the decay and nuclear orientation of {sup 114,116}I and {sup 114}Sb as well as data for the structure of daughter nuclides {sup 114,116}Te. New results for IBM-2 calculations for the structure of {sup 126}Xe are also reported. A new approach to the problem of the underproduction of A = 120 nuclides in the astrophysical r-process is reported.

  13. 1

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    supercomputer remains fastest in world November 18, 2008 New TOP500 list is announced IBM/LANL Roadrunner hybrid supercomputer still #1 LOS ALAMOS, New Mexico, November 18, 2008 - The latest list of the TOP500 computers in the world has been announced at the SC08 supercomputing conference in Austin, Texas, and continued to place the Roadrunner supercomputer at Los Alamos National Laboratory as fastest in the world running the LINPACK benchmark - the industry standard for measuring sustained

  14. Intrepid/Challenger/Surveyor | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Intrepid/Challenger/Surveyor The ALCF houses several IBM Blue Gene/P supercomputers, among the world's fastest computing platforms. Intrepid Intrepid has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottlenecks common in simulations on large, parallel computers. Intrepid uses less

  15. LANSCE | Lujan Center | Data Management

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Lujan Center Data Management Lujan Neutron Scattering Center Logo The Lujan Center within LANSCE utilizes a pulsed source and has a complement of 15 instruments. It maintains a data archive of approximately 4 TB that includes all neutron scattering data collected since it came on line in 1986. Data gathered at the Lujan Center are now archived using the IBM Tivoli Storage System. No Personal information shall be stored with the data other than the User's home institution and institutional

  16. Large Eddy Simulation of Two-Phase Flow Combustion in Gas Turbines |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility Fields of temperature and pressure in a simulation of a complete helicopter combustion chamber performed on the IBM Blue Gene/P at the ALCF (July 2010). Large Eddy Simulation of Two-Phase Flow Combustion in Gas Turbines PI Name: Thierry Poinsot PI Email: poinsot@cerfacs.fr Institution: CERFACS Allocation Program: INCITE Allocation Hours at ALCF: 8 Million Year: 2010 Research Domain: Chemistry The increase of computer power has allowed science to make

  17. Bradbury Museum's supercomputing exhibit gets updated

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bradbury's supercomputing exhibit gets updated Bradbury Museum's supercomputing exhibit gets updated The updated exhibit includes interactive displays, artifacts from early computers, vacuum tubes from the MANIAC computer, and unique IBM Cell blades from Roadrunner. May 19, 2011 Bradbury Science Museum Contact Communications Office (505) 667-7000 LOS ALAMOS, New Mexico, May 19, 2011 - For decades, Los Alamos National Laboratory has been synonymous with supercomputing, achieving a

  18. gdb | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies gdb Using gdb Preliminaries You should prepare a debug version of your code: Compile using -O0 -g If you are using the XL
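    A minimal sketch of the preliminaries the record describes: building an unoptimized binary with debug symbols before attaching gdb. File and program names here are illustrative, not from the record; `-O0 -g` are the standard GCC flags (the IBM XL compilers accept the same pair).

    ```shell
    # Create a tiny program to debug (hypothetical example file).
    cat > hello_debug.c <<'EOF'
    #include <stdio.h>
    int main(void) {
        int answer = 6 * 7;   /* a variable worth inspecting in the debugger */
        printf("%d\n", answer);
        return 0;
    }
    EOF

    # -O0 disables optimization so variables are not optimized away;
    # -g emits debug symbols for gdb.
    gcc -O0 -g -o hello_debug hello_debug.c
    ./hello_debug   # prints 42

    # A typical interactive session (not executed here) would then be:
    #   gdb ./hello_debug
    #   (gdb) break main
    #   (gdb) run
    #   (gdb) print answer
    ```

    Without `-g`, gdb can still run the binary but cannot map addresses back to source lines or variable names, which is why the debug build comes first.
    
    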

  19. predictive-models | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    predictive-models DOE/BC-88/1/SP. EOR Predictive Models: Handbook for Personal Computer Versions of Enhanced Oil Recovery Predictive Models. BPO Staff. February 1988. 76 pp. NTIS Order No. DE89001204. FORTRAN source code and executable programs for the five EOR Predictive Models shown below are available. The five recovery processes modeled are Steamflood, In-Situ Combustion, Polymer, Chemical Flooding, and CO2 Miscible Flooding. The models are available individually. Min Req.: IBM PC/XT, PS-2,

  20. Allinea DDT | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allinea DDT References Allinea DDT Website Allinea DDT User Guide Availability You can use Allinea DDT to debug up to full

  1. Automatic Performance Collection (AutoPerf) | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Automatic

  2. Recent theoretical results for 0νββ-decay including R0νECEC and 0νββM

    SciTech Connect (OSTI)

    Kotila, J.; Barea, J.; Iachello, F.

    2015-10-28

    The most recent (2015) results for 0νββ nuclear matrix elements in the interacting boson model (IBM-2) with light and heavy neutrino exchange, including R0νECEC, are given for all nuclei of interest from {sup 48}Ca to {sup 238}U. Predictions for half-lives and limits for average neutrino mass are also made. Possible additional scenarios, such as Majoron emission, is also discussed.

  3. QBox | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] QBox What is Qbox? Qbox is a C++/MPI scalable parallel implementation of first-principles molecular dynamics (FPMD) based on the plane-wave, pseudopotential

  4. Software and Libraries | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Software and Libraries Expand All Close All Mira/Cetus Vesta

  5. Scalable computations in penetration mechanics

    SciTech Connect (OSTI)

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to the ideal linear performance.

  6. EIA - Energy Conferences & Presentations.

    U.S. Energy Information Administration (EIA) Indexed Site

    8 EIA Conference 2010 Session 8: Smart Grid: Impacts on Electric Power Supply and Demand Moderator: Eric M. Lightner, DOE Speakers: William M. Gausman, Pepco Holdings Christian Grant, Booz & Company, Inc. Michael Valocchi, IBM Global Business Services Moderator and Speaker Biographies Eric M. Lightner, DOE Eric M. Lightner has worked as a program manager for advanced technology development at the U.S. Department of Energy for the last 20 years. Currently, Mr. Lightner is the Director of the

  7. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect (OSTI)

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines the MPI message passing and paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP ''ASCI-Blue-Pacific computer located at Lawrence Livermore National Laboratory (LLNL).

  8. WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy 15_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf (1.8 MB) More Documents & Publications WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf Advance Patent Waiver W(A)2002-023 WC_1997_004_CLASS_ADVANCE_WAIVER_Under_Domestic_First_and_Se

  9. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    50 Print High-Pressure MOF Research Yields Structural Insights 285 thumb mofs Metal-organic frameworks have shown promise in a variety of applications ranging from gas storage to ion exchange. Accurate structural knowledge is key to the understanding of the applicability of these materials; to learn more, researchers used ALS Beamline 11.3.1 to perform in situ, high-pressure, single-crystal x-ray diffraction. Read more... Contact: Kevin Gagnon Industry @ ALS: IBM Probes Unique Material

  10. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    50 Print High-Pressure MOF Research Yields Structural Insights 285 thumb mofs Metal-organic frameworks have shown promise in a variety of applications ranging from gas storage to ion exchange. Accurate structural knowledge is key to the understanding of the applicability of these materials; to learn more, researchers used ALS Beamline 11.3.1 to perform in situ, high-pressure, single-crystal x-ray diffraction. Read more... Contact: Kevin Gagnon Industry @ ALS: IBM Probes Unique Material

  11. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ALSNews Vol. 350 Print High-Pressure MOF Research Yields Structural Insights 285 thumb mofs Metal-organic frameworks have shown promise in a variety of applications ranging from gas storage to ion exchange. Accurate structural knowledge is key to the understanding of the applicability of these materials; to learn more, researchers used ALS Beamline 11.3.1 to perform in situ, high-pressure, single-crystal x-ray diffraction. Read more... Contact: Kevin Gagnon Industry @ ALS: IBM Probes Unique

  12. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ALSNews Vol. 350 Print High-Pressure MOF Research Yields Structural Insights 285 thumb mofs Metal-organic frameworks have shown promise in a variety of applications ranging from gas storage to ion exchange. Accurate structural knowledge is key to the understanding of the applicability of these materials; to learn more, researchers used ALS Beamline 11.3.1 to perform in situ, high-pressure, single-crystal x-ray diffraction. Read more... Contact: Kevin Gagnon Industry @ ALS: IBM Probes Unique

  13. Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc.

    U.S. Energy Information Administration (EIA) Indexed Site

    8: "Smart Grid: Impacts on Electric Power Supply and Demand" Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc. Christian Grant, Booz & Company, Inc. F. Michael Valocchi, IBM Global Business Services [Note: Recorders did not pick up introduction of panel (see biographies for details on the panelists) or introduction of session.] Eric Lightner: Well, good morning, everybody. My name is Eric Lightner. I work at the U.S. Department of

  14. 2009 CNM Users Meeting | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    9 CNM Users Meeting October 5-7, 2009 Full Information Available Here Meeting Summary Plenary Session Views from DOE and Washington Keynote Presentations Stephen Chou (Princeton University), "Nanostructure Engineering: A Path to Discovery and Innovation" Andreas Heinrich (IBM Almaden Research Center), "The Quantum Properties of Magnetic Nanostructures on Surfaces" User Science Highlights Focus Sessions Nanostructured Materials for Solar Energy Utilization Materials and

  15. Blue Gene/Q Versus Blue Gene/P | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Blue Gene/Q Versus Blue Gene/P BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Blue Gene/Q Versus Blue

  16. Cobalt Job Control | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Cobalt Job Control The queuing system used at ALCF is Cobalt. Cobalt has two ways to queue a run: the basic method and

  17. Code_Saturne | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Code_Saturne What is Code_Saturne? Code_Saturne is general-purpose Computational Fluid Dynamics (CFD) software of Électricité de France (EDF), one of the

  18. Compiling and Linking FAQ | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Compiling and Linking FAQ Contents Where do I find

  19. Coreprocessor | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Coreprocessor Coreprocessor is a basic parallel debugging tool that can be used to debug problems at all levels (hardware, kernel, and application). It is particularly useful when working with a large set of core files since it reveals where processors aborted, grouping them together automatically (for example, 9 died here, 500 were here, etc.). See the instructions below for using the Coreprocessor tool. References The Coreprocessor tool (IBM System Blue Gene Solution: Blue Gene/Q System

  20. Design and Fabrication of a Radiation-Hard 500-MHz Digitizer Using Deep Submicron Technology

    SciTech Connect (OSTI)

    K.K. Gan; M.O. Johnson; R.D. Kass; J. Moore

    2008-09-12

    The proposed International Linear Collider (ILC) will use tens of thousands of beam position monitors (BPMs) for precise beam alignment. The signal from each BPM is digitized and processed for feedback control. We proposed the development of an 11-bit (effective) digitizer with 500 MHz bandwidth and 2 G samples/s. The digitizer was somewhat beyond the state-of-the-art. Moreover we planned to design the digitizer chip using the deep-submicron technology with custom transistors that had proven to be very radiation hard (up to at least 60 Mrad). The design mitigated the need for costly shielding and long cables while providing ready access to the electronics for testing and maintenance. In FY06 as we prepared to submit a chip with test circuits and a partial ADC circuit we found that IBM had changed the availability of our chosen IC fabrication process (IBM 6HP SiGe BiCMOS), making it unaffordable for us, at roughly 3 times the previous price. This prompted us to change our design to the IBM 5HPE process with 0.35 µm feature size. We requested funding for FY07 to continue the design work and submit the first prototype chip. Unfortunately, the funding was not continued and we will summarize below the work accomplished so far.

  1. Validation of nuclear criticality safety software and 27 energy group ENDF/B-IV cross sections. Revision 1

    SciTech Connect (OSTI)

    Lee, B.L. Jr.; D`Aquila, D.M.

    1996-01-01

    The original validation report, POEF-T-3636, was documented in August 1994. The document was based on calculations that were executed during June through August 1992. The statistical analyses in Appendix C and Appendix D were completed in October 1993. This revision is written to clarify the margin of safety being used at Portsmouth for nuclear criticality safety calculations. This validation gives Portsmouth NCS personnel a basis for performing computerized KENO V.a calculations using the Lockheed Martin Nuclear Criticality Safety Software. The first portion of the document outlines basic information in regard to validation of NCSS using ENDF/B-IV 27-group cross sections on the IBM3090 at ORNL. A basic discussion of the NCSS system is provided, some discussion on the validation database and validation in general. Then follows a detailed description of the statistical analysis which was applied. The results of this validation indicate that the NCSS software may be used with confidence for criticality calculations at the Portsmouth Gaseous Diffusion Plant. For calculations of Portsmouth systems using the specified codes and systems covered by this validation, a maximum k{sub eff} including 2{sigma} of 0.9605 or lower shall be considered as subcritical to ensure a calculational margin of safety of 0.02. The validation of NCSS on the IBM 3090 at ORNL was extended to include NCSS on the IBM 3090 at K-25.

  2. A Fault-Oblivious Extreme-Scale Execution Environment (FOX)

    SciTech Connect (OSTI)

    Van Hensbergen, Eric; Speight, William; Xenidis, Jimi

    2013-03-15

    IBM Research’s contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: ● collaboration with Boston University around the Kittyhawk cloud infrastructure which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes ● operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS ● IBM Research also participated in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA’s HPCS program. Both of these contributions were built on a foundation of previous operating systems research funding by the Department of Energy’s FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source coded for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM’s approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exasacale Operating

  3. Development of an Immersed Boundary Method to Resolve Complex Terrain in the Weather Research and Forecasting Model

    SciTech Connect (OSTI)

    Lunquist, K A; Chow, F K; Lundquist, J K; Mirocha, J D

    2007-09-04

    simulations, on the other hand, are performed by numerical weather prediction (NWP) codes, which cannot handle the geometry of the urban landscape, but do provide a more complete representation of atmospheric physics. NWP codes typically use structured grids with terrain-following vertical coordinates, include a full suite of atmospheric physics parameterizations, and allow for dynamic synoptic scale lateral forcing through grid nesting. Terrain following grids are unsuitable for urban terrain, as steep terrain gradients cause extreme distortion of the computational cells. In this work, we introduce and develop an immersed boundary method (IBM) to allow the favorable properties of a numerical weather prediction code to be combined with the ability to handle complex terrain. IBM uses a non-conforming structured grid, and allows solid boundaries to pass through the computational cells. As the terrain passes through the mesh in an arbitrary manner, the main goal of the IBM is to apply the boundary condition on the interior of the domain as accurately as possible. With the implementation of the IBM, numerical weather prediction codes can be used to explicitly resolve urban terrain. Heterogeneous urban domains using the IBM can be nested into larger mesoscale domains using a terrain-following coordinate. The larger mesoscale domain provides lateral boundary conditions to the urban domain with the correct forcing, allowing seamless integration between mesoscale and urban scale models. Further discussion of the scope of this project is given by Lundquist et al. [2007]. The current paper describes the implementation of an IBM into the Weather Research and Forecasting (WRF) model, which is an open source numerical weather prediction code. The WRF model solves the non-hydrostatic compressible Navier-Stokes equations, and employs an isobaric terrain-following vertical coordinate. 
Many types of IB methods have been developed by researchers; a comprehensive review can be found in Mittal

  4. NUG Meeting February 22, 2001

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NUG Meeting February 22, 2001 Dates February 22, 2001 Location NERSC's Oakland Scientific Facility 415 20th St. [MAP] Oakland CA, 94612 NERSC's Web Site Presentations Agenda Thursday, February 22 8:00 - 8:30 Pastries and coffee available 8:30 - 8:45 Rob Ryne Introductions 8:45 - 9:30 Walt Polansky Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - ... Read More » Photos Notes for Greenbook Process W.

  5. Featured Announcements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    April 2013 2014 INCITE Call for Proposals - Due June 28 April 30, 2013 by Francesca Verdier The 2014 INCITE Call for Proposals is now open. Open to researchers from academia, government labs, and industry, the INCITE Program is the major means by which the scientific community gains access to the Leadership Computing Facilities' resources. INCITE is currently soliciting proposals for research on the 27-petaflops Cray XK7 "Titan" and the 10-petaflops IBM Blue Gene/Q "Mira"

  6. GROMACS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GRC ANNUAL MEETING & GEA GEOEXPO+ GRC ANNUAL MEETING & GEA GEOEXPO+ October 23, 2016 9:00AM EDT to October 26, 2016 5:00PM EDT GRC Annual Meeting & GEA GEOEXPO+ October 23-26, Sacramento, California, USA GRC http://www.geothermal.org/meet-new.html GEA http://www.geothermalexpo.org

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to

  7. Interactive nuclear plant analyzer for VVER-440 reactor

    SciTech Connect (OSTI)

    Shier, W.; Horak, W.; Kennett, R.

    1992-05-01

    This document discusses an interactive nuclear plant analyzer (NPA) which has been developed for a VVER-440, Model 213 reactor for use in the training of plant personnel, the development and verification of plant operating procedures, and in the analysis of various anticipated operational occurrences and accident scenarios. This NPA is operational on an IBM RISC-6000 workstation and utilizes the RELAP5/MOD2 computer code for the calculation of the VVER-440 reactor response to the interactive commands initiated by the NPA operator.

  8. Simulation analysis of within-day flow fluctuation effects on trout below flaming Gorge Dam.

    SciTech Connect (OSTI)

    Railsback, S. F.; Hayse, J. W.; LaGory, K. E.; Environmental Science Division; EPRI

    2006-01-01

    In addition to being renewable, hydropower has the advantage of allowing rapid load-following, in that the generation rate can easily be varied within a day to match the demand for power. However, the flow fluctuations that result from load-following can be controversial, in part because they may affect downstream fish populations. At Flaming Gorge Dam, located on the Green River in northeastern Utah, concern has been raised about whether flow fluctuations caused by the dam disrupt feeding at a tailwater trout fishery, as fish move in response to flow changes and as the flow changes alter the amount or timing of the invertebrate drift that trout feed on. Western Area Power Administration (Western), which controls power production on submonthly time scales, has made several operational changes to address concerns about flow fluctuation effects on fisheries. These changes include reducing the number of daily flow peaks from two to one and operating within a restricted range of flows. These changes significantly reduce the value of the power produced at Flaming Gorge Dam and put higher load-following pressure on other power plants. Consequently, Western has great interest in understanding what benefits these restrictions provide to the fishery and whether adjusting the restrictions could provide a better tradeoff between power and non-power concerns. Directly evaluating the effects of flow fluctuations on fish populations is unfortunately difficult. Effects are expected to be relatively small, so tightly controlled experiments with large sample sizes and long study durations would be needed to evaluate them. Such experiments would be extremely expensive and would be subject to the confounding effects of uncontrollable variations in factors such as runoff and weather. Computer simulation using individual-based models (IBMs) is an alternative study approach for ecological problems that are not amenable to analysis using field studies alone. An IBM simulates how a

  9. Early Site Permit Demonstration Program: Nuclear Power Plant Siting Database

    Energy Science and Technology Software Center (OSTI)

    1994-01-28

    This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or combined construction permit/operating license (10CFRR522, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied datamore » by topic, keyword, or other input. The software is designed for operation on IBM compatible computers with DOS.« less

  10. ARPA-E's 19 New Projects Focus on Battery Management and Storage |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy E's 19 New Projects Focus on Battery Management and Storage ARPA-E's 19 New Projects Focus on Battery Management and Storage August 7, 2012 - 1:17pm Addthis Principal Deputy Director Eric Toone, former ARPA-E Director Arun Majumdar, the Honorable Bart Gordon and IBM Research Senior Director Kathleen Kingscott discuss the future of energy innovation at an ITIF event on August 2. | Energy Department photo. Principal Deputy Director Eric Toone, former ARPA-E Director Arun

  11. ABAREX -- A neutron spherical optical-statistical-model code -- A user`s manual

    SciTech Connect (OSTI)

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  12. Machine Overview | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview Blue Gene/Q systems are composed of login nodes, I/O nodes, and compute nodes. Login Nodes Login and compile nodes are IBM Power 7-based systems running Red Hat Linux and are the user's interface to a Blue Gene/Q system. This is where users login, edit files, compile, and submit jobs. These are shared resources with multiple users. I/O Nodes The I/O node and compute environments are based around a very simple 1.6 GHz 16 core PowerPC A2 system with 16 GB of RAM. I/O node environments are

  13. Market Evolution: Wholesale Electricity Market Design for 21st Century Power Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    1stCenturyPower.org Technical Report NREL/TP-6A20-57477 October 2013 Contract No. DE-AC36-08GO28308 Market Evolution: Wholesale Electricity Market Design for 21 st Century Power Systems Jaquelin Cochran, Mackay Miller, Michael Milligan, Erik Ela, Douglas Arent, and Aaron Bloom National Renewable Energy Laboratory Matthew Futch IBM Juha Kiviluoma and Hannele Holtinnen VTT Technical Research Centre of Finland Antje Orths Energinet.dk Emilio Gómez-Lázaro and Sergio Martín-Martínez Universidad

  14. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Agenda NUG Meeting: June 5-6, 2000 Garden Plaza Hotel, Oak Ridge, TN The next NERSC User Group meeting will be held in Oak Ridge, TN, June 5-7 and will be hosted by Oak Ridge National Laboratory (ORNL). See the agenda, below. The meeting will be all day Monday, June 5, and is expected to finish Tuesday, June 6, at lunchtime. Following this business meeting will be a training class on the new IBM SP in conjunction with Users Helping Users (UHU) talks and discussions with the consultants.

  15. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentations Presentations Sort by: Default | Name | Date (low-high) | Date (high-low) | Source | Category Perspectives from Washington February 22, 2001 | Author(s): Walt Polansky | Download File: Polansky.NUGMeeting2-01.ppt | ppt | 750 KB Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8 MB Goals for the next Greenbook February 22, 2001 | Author(s): Doug Rotman |

  16. Unconventional Architectures for High-Throughput Sciences

    SciTech Connect (OSTI)

    Nieplocha, Jarek; Marquez, Andres; Petrini, Fabrizio; Chavarría-Miranda, Daniel

    2007-06-15

    Science laboratories and sophisticated simulations are producing data of increasing volumes and complexities, and that’s posing significant challenges to current data infrastructures as terabytes to petabytes of data must be processed and analyzed. Traditional computing platforms, originally designed to support model-driven applications, are unable to meet the demands of the data-intensive scientific applications. Pacific Northwest National Laboratory (PNNL) research goes beyond “traditional supercomputing” applications to address emerging problems that need scalable, real-time solutions. The outcome is new unconventional architectures for data-intensive applications specifically designed to process the deluge of scientific data, including FPGAs, multithreaded architectures and IBM's Cell.

  17. Hopper:Improving I/O performance to GSCRATCH and PROJECT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GSCRATCH/PROJECT Performance Tuning on Hopper Hopper:Improving I/O performance to GSCRATCH and PROJECT What are GSCRATCH/PROJECT? GSCRATCH and PROJECT are two file systems at NERSC that one can access on most computational systems. They are both based on the IBM GPFS file system and have multiple racks of dedicated servers and disk arrays. How are GSCRATCH/PROJECT connected to Hopper? As shown in the figure below, GSCRATCH and PROJECT are each connected to several Private NSD Servers (PNSD; for

  18. Dose commitments due to radioactive releases from nuclear power plant sites: Methodology and data base. Supplement 1

    SciTech Connect (OSTI)

    Baker, D.A.

    1996-06-01

    This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix 1, design objectives. This supplement is the last report in the NUREG/CR-2850 series.

  19. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema (OSTI)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2014-06-05

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  20. EIA directory of electronic products. Third quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-09-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). Inquirers may telephone NEIC's information specialists at (202) 586-8800 with any data questions relating to the content of EIA Directory of Electronic Products.

  1. EIA directory of electronic products, first quarter 1995

    SciTech Connect (OSTI)

    1995-06-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). For each product listed in this directory, a detailed abstract is provided which describes the data published. Specific technical questions may be referred to the appropriate contact person.

  2. Cetus and Vesta | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Cetus Cetus and Vesta Cetus Cetus shares the same software environment and file systems as Mira. The primary role of Cetus is to run small jobs in order to debug problems that occurred on Mira. Cetus System Configuration Architecture: IBM BG/Q Processor: 16 1600 MHz PowerPC A2 cores Cabinets: 4 Nodes: 4,096 Cores/node: 16 Total cores: 65,536 cores Memory/node: 16 GB RAM per node Memory/core: 1 GB

  3. Bridging the Gap to 64-bit Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Opteron and AMD64 A Commodity 64 bit x86 SOC Fred Weber Vice President and CTO Computation Products Group Advanced Micro Devices 22 April 2003 AMD - Salishan HPC 2003 2 Opteron/AMD64 Launch - Today! * Official Launch of AMD64 architecture and Production Server/Workstation CPUs - Series 200 (2P) available today - Series 800 (4P+) available later in Q2 * Oracle, IBM-DB2, Microsoft, RedHat, SuSe software support - And many others * Dozens of server system vendors - System builder availability this

  4. Chippewa Falls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    years... & When did it all begin? 2 1974? 1978? 1963? CDC 6600 - 1974 NERSC started service with the first Supercomputer... 3 A well-used system - Serial Number 1 ● On its last legs... Designed and built in Chippewa Falls Launch Date: 1963 Load / Store Architecture ● First RISC Computer! First CRT Monitor Freon Cooled State-of-the-Art Remote Access at NERSC ● Via 4 acoustic modems, manually answered capable of 10 characters /sec 50th Anniversary of the IBM / Cray Rivalry... 2/6/14

  5. ORNL Cray X1 evaluation status report

    SciTech Connect (OSTI)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.; Balay, S.; Bland, A.S; Colgan, J.; D'Azevedo, E.F.; Dongarra, J.J.; Dunigan Jr., T.H.; Fahey, M.R.; Fahey, R.A.; Geist, A.; Gordon, M.; Harrison, R.J.; Kaushik, D.; Krishnakumar, M.; Luszczek, P.; Mezzacappa, A.; Nichols, J.A.; Nieplocha, J.; Oliker, L.; Packwood, T.; Pindzola, M.S.; Schulthess, T.C.; Vetter, J.S.; White III, J.B.; Windus, T.L.; Worley, P.H.; Zacharia, T.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors.
- Molecular dynamics simulations related to the phenomenon of

  6. Performance assessment of OTEC power systems and thermal power plants. Final report. Volume I

    SciTech Connect (OSTI)

    Leidenfrost, W.; Liley, P.E.; McDonald, A.T.; Mudawwar, I.; Pearson, J.T.

    1985-05-01

    The focus of this report is on closed-cycle ocean thermal energy conversion (OTEC) power systems under research at Purdue University. The working operations of an OTEC power plant are briefly discussed. Methods of improving the performance of OTEC power systems are presented. Brief discussions on the methods of heat exchanger analysis and design are provided, as are the thermophysical properties of the working fluids and seawater. An interactive code capable of analyzing OTEC power system performance is included for use with an IBM personal computer.

  7. Scott Burrow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scott Burrow Scott Burrow staffportrait Scott Burrow csburrow@lbl.gov Phone: (510) 486-4313 Fax: (510) 486-4316 Computational Systems Group 1 Cyclotron Road Mail Stop 943-256 Berkeley, CA 94720 Scott Burrow is a system administrator in the Computational Systems Group. Scott is currently the system lead for Carver's testbed and a backup for Carver. Prior to that Scott worked in commercial and high performance computing. Scott served as an IBM consultant on-site for NASA Ames from 2007-2009 and at

  8. bgclang Compiler | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Projects bgclang Compiler Cobalt Scheduler GLEAN Petrel Swift bgclang Compiler bgclang, a compiler toolchain based on the LLVM/Clang compiler infrastructure, but customized for the IBM Blue Gene/Q (BG/Q) supercomputer, is a successful experiment in creating an alternative, high-quality compiler toolchain for non-commodity HPC hardware. By enhancing LLVM (http://llvm.org/) with support for the BG/Q's QPX vector instruction set, bgclang inherits from LLVM/Clang a high-quality auto-vectorizing

  9. ISC2005v2.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Supercomputing: The Top Three Breakthroughs of the Last 20 Years and the Top Three Challenges for the Next 20 Years Horst Simon Associate Laboratory Director Lawrence Berkeley National Laboratory ISC 2005 Heidelberg June 22, 2005 Signpost System 1985 Cray-2 * 244 MHz (4.1 nsec) * 4 processors * 1.95 Gflop/s peak * 2 GB memory (256 MW) * 1.2 Gflop/s LINPACK R_max * 1.6 m 2 floor space * 0.2 MW power Signpost System in 2005 IBM BG/L @ LLNL * 700 MHz (x 2.86) * 65,536 nodes (x 16,384) * 180 (360)

  10. simulators | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    simulators DOE/BC-89/3/SP. Handbook for Personal Computer Version of BOAST II: A Three- Dimensional, Three-Phase Black Oil Applied Simulation Tool. Bartlesville Project Office. January 1989. 82 pp. NTIS Order No. DE89000725. FORTRAN source code and executable program. Min. Req.: IBM PC/AT, PS-2, or compatible computer with 640 Kbytes of memory. Download 464 KB Manual 75 KB Manual 404 KB Reference paper (1033-3,v1) by Fanchi, et al. Manual 83 KB Reference paper (1033-3,v2) by Fanchi, et al. BOAST

  11. Experiences from the Roadrunner petascale hybrid systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J; Pakin, Scott; Lang, Mike; Sancho Pitarch, Jose C; Davis, Kei; Barker, Kevin J; Peraza, Josh

    2010-01-01

    The combination of flexible microprocessors (AMD Opterons) with high-performing accelerators (IBM PowerXCell 8i) resulted in the extremely powerful Roadrunner system. Many challenges in both hardware and software were overcome to achieve its goals. In this talk we detail some of the experiences in achieving performance on the Roadrunner system. In particular we examine several implementations of the kernel application Sweep3D: a work-queue approach, a more portable Threading Building Blocks approach, and an MPI-on-the-accelerator approach.

  12. (High-Tc superconductivity)

    SciTech Connect (OSTI)

    Rasolt, M.

    1990-10-02

    A detailed description of the research conducted at the University of Paris at Orsay and the International Meeting on High-Tc Superconductivity, organized by the traveler, H. Schultz from Orsay, and D. M. Newns from IBM, is presented. Particular emphasis is placed on the collaboration with F. Perrot of the Centre Europeen de Calcul Atomique et Moleculaire. In addition, the different scientific interactions, the information obtained, and the implications of this scientific exchange for the research conducted in the Solid State Division of ORNL are described.

  13. Computers for artificial intelligence a technology assessment and forecast

    SciTech Connect (OSTI)

    Miller, R.K.

    1986-01-01

    This study reviews the development and current state of the art in computers for artificial intelligence, including LISP machines, AI workstations, professional and engineering workstations, minicomputers, mainframes, and supercomputers. Major computer systems for AI applications are reviewed. The use of personal computers for expert system development is discussed, and AI software for the IBM PC, Texas Instruments Professional Computer, and Apple Macintosh is presented. Current research aimed at developing a new computer for artificial intelligence is described, and future technological developments are discussed.

  14. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Introduction to the NERSC HPCF (High Performance Computing Facilities) June 7, 2000 | Author(s): Thomas M. DeBoni | Download File: IntroTalk.ppt | ppt | 228 KB This talk will briefly introduce the NERSC hardware and software of the computational systems, mass storage systems, and auxiliary servers. It will also touch on matters of usage, access, and information sources. The intent is to establish a baseline of knowledge for all attendees. The IBM SP, Evolution from Phase I to Phase II June 7,

  15. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC "Visualization Greenbook": Future Visualization Needs of the DOE Computational Science Community Hosted at NERSC October 1, 2002 | Author(s): Bernd Hamann, E. Wes Bethel, Horst Simon, Juan Meza | Download File: VisGreenFindings-LBNL-51699.pdf | pdf | 2.8 MB This report presents the findings and recommendations that emerged from a one-day workshop held at Lawrence Berkeley National Laboratory (LBNL) on June 5, 2002, in conjunction with the NERSC User Group (NUG) Meeting. IBM

  16. Performance analysis of parallel supernodal sparse LU factorization

    SciTech Connect (OSTI)

    Grigori, Laura; Li, Xiaoye S.

    2004-02-05

    We investigate performance characteristics for the LU factorization of large matrices with various sparsity patterns. We consider supernodal right-looking parallel factorization on a bi-dimensional grid of processors, making use of static pivoting. We develop a performance model and we validate it using the implementation in SuperLU-DIST, the real matrices and the IBM Power3 machine at NERSC. We use this model to obtain performance bounds on parallel computers, to perform scalability analysis and to identify performance bottlenecks. We also discuss the role of load balance and data distribution in this approach.

  17. ALCF Early Science Program Tim Williams (ESP Manager) HPC User Forum

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Early Science Program Tim Williams (ESP Manager) HPC User Forum April 15, 2015 Argonne Leadership Computing Facility 2 • Mission: capability computing (leadership-class) INCITE ALCC Director's Discretionary Company University Govt. Lab Foreign USA DOE NSF NIST Company Research Funding Source ... Production Systems (ALCF-2) 3 Mira - IBM Blue Gene/Q system • 49,152 nodes / 786,432 cores • PowerPC A2 cpu - 16 cores, 4 HW threads/core • 786 TB of memory • Peak flop rate: 10 PF

  18. Cray XE6 Workshop

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cray XE6 Workshop February 7-8, 2011 Outline * About OpenMP * Parallel Regions * Using OpenMP on Hopper * Worksharing Constructs * Synchronization * Data Scope * Tasks * Hands-on Exercises 2 What is OpenMP * OpenMP is an industry standard API of C/C++ and Fortran for shared memory parallel programming. - OpenMP Architecture Review Board * Major compiler vendors: PGI, Cray, Intel, Oracle, HP, Fujitsu, Microsoft, AMD, IBM, NEC, Texas Instrument, ... * Research institutions: cOMPunity, DOE/NASA

  19. BREEDER: a microcomputer program for financial analysis of a large-scale prototype breeder reactor

    SciTech Connect (OSTI)

    Giese, R.F.

    1984-04-01

    This report describes a microcomputer-based, single-project financial analysis program: BREEDER. BREEDER is a user-friendly model designed to facilitate frequent and rapid analyses of the financial implications associated with alternative design and financing strategies for electric generating plants and large-scale prototype breeder (LSPB) reactors in particular. The model has proved to be a useful tool in establishing cost goals for LSPB reactors. The program is available on floppy disks for use on an IBM personal computer (or IBM look-a-like) running under PC-DOS or a Kaypro II transportable computer running under CP/M (and many other CP/M machines). The report documents version 1.5 of BREEDER and contains a user's guide. The report also includes a general overview of BREEDER, a summary of hardware requirements, a definition of all required program inputs, a description of all algorithms used in performing the construction-period and operation-period analyses, and a summary of all available reports. The appendixes contain a complete source-code listing, a cross-reference table, a sample interactive session, several sample runs, and additional documentation of the net-equity program option.

  20. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    SciTech Connect (OSTI)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-07-15

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations, and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of 192,194,196Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the γ-vibration bands are compared to the corresponding sequences of experimental states.

  1. Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2013-01-21

    Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization model. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  2. Tracking the Performance Evolution of Blue Gene Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Gallo, Diego S.; Chen, Dong; Brunheroto, Jose R.; Ryu, Kyung D.; Chiu, George L.; Hoisie, Adolfy

    2013-06-17

    IBM's Blue Gene supercomputer has evolved through three generations, from the original Blue Gene/L to P to Q. A higher level of integration has enabled greater single-core performance and larger concurrency per compute node. Although these changes have brought with them a higher overall system peak performance, no study has examined in detail the evolution of performance across system generations. In this work we make two significant contributions: providing a comparative performance analysis across Blue Gene generations using a consistent set of tests, and providing a validated performance model of the NEK-Bone proxy application. The combination of empirical analysis and the predictive performance model enables us not only to directly compare measured performance but also to compare system configurations that cannot currently be measured. We provide insights into how the changing characteristics of Blue Gene have impacted application performance, as well as what future systems may be able to achieve.

  3. Automation and optimization of the design parameters in tactical military pipeline systems. Master's thesis

    SciTech Connect (OSTI)

    Frick, R.M.

    1988-12-01

    Tactical military petroleum pipeline systems will play a vital role in any future conflict due to an increased consumption of petroleum products by our combined Armed Forces. The tactical pipeline must be rapidly constructed and highly mobile to keep pace with the constantly changing battle zone. Currently, the design of these pipeline systems is time-consuming and inefficient, which may cause shortages of fuel and pipeline components at the front lines. Therefore, the need for a computer program that will both automate and optimize the pipeline design process is quite apparent. These design needs are satisfied by developing a software package using the Advanced BASIC (IBM DOS) programming language, made to run on an IBM-compatible personal computer. The program affords the user the options of either finding the optimum pump station locations for a proposed pipeline or calculating the maximum operating pressures for an existing pipeline. By automating the design procedure, a field engineer can vary the pipeline length, diameter, roughness, viscosity, gravity, flow rate, pump station pressure, or terrain profile and see how it affects the other parameters in just a few seconds. The design process was optimized by implementing a weighting scheme based on the volume percent of each fuel in the pipeline at any given time.

  4. A valiant little terminal: A VLT user's manual. Revision 4

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

    VLT came to be used at SLAC (Stanford Linear Accelerator Center) because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and TeX workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public-domain VT100 emulator written by Dave Wecker et al., and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  5. Surveillance data bases, analysis, and standardization program

    SciTech Connect (OSTI)

    Kam, F.B.K.

    1990-09-26

    The traveler presented a paper at the Seventh ASTM-EURATOM Symposium on Reactor Dosimetry and co-chaired an oral session on Computer Codes and Methods. Papers of considerable interest to the NRC Surveillance Dosimetry Program involved statistically based adjustment procedures and uncertainties. The information exchange meetings with Czechoslovakia and Hungary were very enlightening. Lack of large computers has hindered their surveillance programs. They depended very heavily on information from their measurement programs, which were somewhat limited because of the lack of sophisticated electronics. The Nuclear Research Institute at Rez had to rely on expensive mockups of power reactor configurations to test their fluence exposures. Computers, computer codes, and updated nuclear data would advance their technology rapidly, and they were not hesitant to admit this fact. Both eastern-bloc countries said that IBM is providing an IBM 3090 for educational purposes, but research and development studies would have very limited access. They were very apologetic that their currencies were not convertible; any exchange means that they could provide services or pay for US scientists in their respective countries, but funding for their scientists in the United States, or expenses that involve payment in dollars, must come from us.

  6. PC Basic Linear Algebra Subroutines

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.

  7. A brief summary on formalizing parallel tensor distributions redistributions and algorithm derivations.

    SciTech Connect (OSTI)

    Schatz, Martin D.; Kolda, Tamara G.; van de Geijn, Robert

    2015-09-01

    Large-scale datasets in computational chemistry typically require distributed-memory parallel methods to perform a special operation known as tensor contraction. Tensors are multidimensional arrays, and a tensor contraction is akin to matrix multiplication with special types of permutations. Creating an efficient algorithm and optimized implementation in this domain is complex, tedious, and error-prone. To address this, we develop a notation to express data distributions so that we can apply automated methods to find optimized implementations for tensor contractions. We consider the spin-adapted coupled cluster singles and doubles method from computational chemistry and use our methodology to produce an efficient implementation. Experiments performed on the IBM Blue Gene/Q and Cray XC30 demonstrate both improved performance and reduced memory consumption.

  8. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Agenda Thursday, February 22 8:00 - 8:30 Pastries and coffee available 8:30 - 8:45 Rob Ryne Introductions 8:45 - 9:30 Walt Polansky Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - 10:45 Break 10:45 - 11:15 Bill Kramer Lessons Learned from the last Greenbook 11:15 - 11:45 Doug Rotman Goals for the next Greenbook 11:45 - 12:15 Tour of the Oakland Facility 12:15 - 1:45 Lunch 1:45 - 2:30 Mike Minkoff

  9. An Improved Multigroup Monte Carlo Criticality Code System for Cross Section Processing.

    Energy Science and Technology Software Center (OSTI)

    1982-11-23

    Version 00 KENO-IV is an improvement and extension of KENO, which was contributed by Oak Ridge National Laboratory and written for the IBM 360 computer. It is flexibly dimensioned, utilizes free-form input, and offers more geometry options than KENO. KENO-IV solves nuclear criticality eigenvalue problems. The results calculated by KENO-IV include k-effective, lifetime and generation time, energy-dependent leakages and absorptions, energy- and region-dependent fluxes, and region-dependent fission densities. Criticality searches can be made on unit dimensions or on the number of units in an array. KENO-IV/CRC has several added features which include a neutron balance edit, PICTURE routines to check the input geometry, and a random number sequencing subroutine written in Fortran for the Cray-1 computer.

  10. TOP500 Supercomputers for June 2002

    SciTech Connect (OSTI)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  11. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    SciTech Connect (OSTI)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter,Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  12. Polymer Hybrid Photovoltaics for Inexpensive Electricity Generation: Final Technical Report, 1 September 2001--30 April 2006

    SciTech Connect (OSTI)

    Carter, S. A.

    2006-07-01

    The project goal is to understand the operating mechanisms underlying the performance of polymer hybrid photovoltaics to enable the development of a photovoltaic with a maximum power conversion efficiency over cost ratio that is significantly greater than current PV technologies. Plastic or polymer-based photovoltaics can have significant cost advantages over conventional technologies in that they are compatible with liquid-based plastic processing and can be assembled onto plastic under atmospheric conditions (ambient temperature and pressure) using standard printing technologies, such as reel-to-reel and screen printing. Moreover, polymer-based PVs are lightweight, flexible, and largely unbreakable, which make shipping, installation, and maintenance simpler. Furthermore, a numerical simulation program was developed (in collaboration with IBM) to fully simulate the performance of multicomponent polymer photovoltaic devices, and a manufacturing method was developed (in collaboration with Add-vision) to inexpensively manufacture larger-area devices.

  13. Simple Electric Vehicle Simulation

    Energy Science and Technology Software Center (OSTI)

    1993-07-29

    SIMPLEV2.0 is an electric vehicle simulation code which can be used with any IBM compatible personal computer. This general purpose simulation program is useful for performing parametric studies of electric and series hybrid electric vehicle performance on user input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC.

  14. Modeling and simulation of Red Teaming. Part 1, Why Red Team M&S?

    SciTech Connect (OSTI)

    Skroch, Michael J.

    2009-11-01

    Red teams that address complex systems have rarely taken advantage of Modeling and Simulation (M&S) in a way that reproduces most or all of a red-blue team exchange within a computer. Chess programs, starting with IBM's Deep Blue, outperform humans in that red-blue interaction, so why shouldn't we think computers can outperform traditional red teams now or in the future? This and future position papers will explore possible ways to use M&S to augment or replace traditional red teams in some situations, the features Red Team M&S should possess, how one might connect live and simulated red teams, and existing tools in this domain.

  15. A BLAS-3 version of the QR factorization with column pivoting

    SciTech Connect (OSTI)

    Quintana-Orti, G.; Sun, X.; Bischof, C.H.

    1998-09-01

    The QR factorization with column pivoting (QRP), originally suggested by Golub, is a popular approach to computing rank-revealing factorizations. Using Level 1 BLAS, it was implemented in LINPACK, and, using Level 2 BLAS, in LAPACK. While the Level 2 BLAS version delivers superior performance in general, it may result in worse performance for large matrix sizes due to cache effects. The authors introduce a modification of the QRP algorithm which allows the use of Level 3 BLAS kernels while maintaining the numerical behavior of the LINPACK and LAPACK implementations. Experimental comparisons of this approach with the LINPACK and LAPACK implementations on IBM RS/6000, SGI R8000, and DEC AXP platforms show considerable performance improvements.
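    The rank-revealing behavior described above survives in the LAPACK routine that descends from this BLAS-3 work (dgeqp3), which SciPy exposes directly. A minimal sketch, with an arbitrarily chosen test matrix whose last two columns are numerically negligible:

```python
# QR with column pivoting: pivoting orders the diagonal of R by decreasing
# magnitude, so tiny trailing diagonal entries reveal numerical rank.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
# 8x6 matrix whose columns are scaled so the last two are ~1e-8 smaller
A = rng.standard_normal((8, 6)) @ np.diag([1e3, 1e2, 10.0, 1.0, 1e-8, 1e-9])

Q, R, piv = qr(A, pivoting=True)        # factors A[:, piv] = Q @ R (dgeqp3)
d = np.abs(np.diag(R))
rank = int(np.sum(d > d[0] * 1e-6))     # count diagonals above a drop tolerance
```

The permutation `piv` is what distinguishes QRP from plain QR: it moves the column of largest remaining norm to the front at each step, which is why the diagonal magnitudes come out non-increasing.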

  16. Computer Algebra System

    Energy Science and Technology Software Center (OSTI)

    1992-05-04

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.

  17. EIA directory of electronic products. Second quarter 1995

    SciTech Connect (OSTI)

    1995-10-04

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. They are available to the public on magnetic tapes; selected data files/models are available on diskette for IBM-compatible personal computers. This directory first presents the on-line files and compact discs. This is followed by descriptions and technical contacts and ordering and other information on the data files and models. An index by energy source is included. Additional ordering information is in the preface. The data files cover petroleum, natural gas, electricity, coal, integrated statistics, and consumption; the models cover petroleum, natural gas, electricity, coal, nuclear, and multifuel.

  18. PCDAS Version 2. 2: Remote network control and data acquisition

    SciTech Connect (OSTI)

    Fishbaugher, M.J.

    1987-09-01

    This manual is intended for both technical and non-technical people who want to use the PCDAS remote network control and data acquisition software. If you are unfamiliar with remote data collection hardware systems designed at Pacific Northwest Laboratory (PNL), this introduction should answer your basic questions. Even if you have some experience with the PNL-designed Field Data Acquisition Systems (FDAS), it would be wise to review this material before attempting to set up a network. This manual was written based on the assumption that you have a rudimentary understanding of personal computer (PC) operations using Disk Operating System (DOS) version 2.0 or greater (IBM 1984). You should know how to create subdirectories and get around the subdirectory tree.

  19. Simulation of oil-slick transport in Great Lakes connecting channels. User's manual for the River Spill Simulation Model (ROSS). Special report

    SciTech Connect (OSTI)

    Shen, H.T.; Yapa, P.D.; Petroski, M.E.

    1991-12-01

    Two computer models, named ROSS and LROSS, have been developed for simulating oil slick transport in rivers and lakes, respectively. The oil slick transformation processes considered in these models include advection, spreading, evaporation and dissolution. These models can be used for slicks of any shape originating from instantaneous or continuous spills in rivers and lakes with or without ice covers. Although developed for the connecting channels in the upper Great Lakes, including the Detroit River, Lake St. Clair, the St. Clair River and the St. Marys River, these models are site independent and can be used for other rivers and lakes. The programs are written in the FORTRAN programming language to be compatible with the FORTRAN77 compiler. In addition, a user-friendly, menu-driven program with graphics capability was developed for the IBM-PC AT computer, so that these models can be easily used to assist the cleanup action in the connecting channels should an oil spill occur.
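    Of the four transformation processes listed, advection is the simplest to sketch: each slick parcel drifts with the surface current plus a small fraction of the wind. The 3% wind-drift factor, the forward-Euler stepping, and all numerical values below are common textbook choices, not necessarily those used in ROSS/LROSS:

```python
# Lagrangian advection of oil-slick parcels (illustrative sketch, not ROSS).
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, size=(50, 2))   # parcel positions, metres (assumed spill)

u_water = np.array([0.30, 0.05])             # river surface current, m/s (assumed)
u_wind = np.array([5.0, -2.0])               # wind at 10 m, m/s (assumed)
wind_factor = 0.03                           # classic 3% wind-drift rule

dt, nsteps = 60.0, 30                        # one-minute steps for 30 minutes
drift = u_water + wind_factor * u_wind
for _ in range(nsteps):
    # forward-Euler advection plus a small random walk standing in for spreading
    xy += drift * dt + rng.normal(0.0, 0.5, size=xy.shape)

centroid = xy.mean(axis=0)                   # slick centre of mass after 30 min
```

Evaporation and dissolution would enter as mass-loss terms per parcel; they do not change the trajectory computation sketched here.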

  20. Trailblazing with Roadrunner

    SciTech Connect (OSTI)

    Henning, Paul J; White, Andrew B

    2009-01-01

    In June 2008, a new supercomputer broke the petaflop/s performance barrier, more than doubling the computational performance of the next fastest machine on the TOP500 Supercomputing Sites list (http://top500.org). This computer, named Roadrunner, is the result of an intensive collaboration between IBM and Los Alamos National Laboratory, where it is now located. Aside from its performance, Roadrunner has two distinguishing characteristics: a very good power/performance ratio and a 'hybrid' computer architecture that mixes several types of processors. By November 2008, the traditionally-architected Jaguar computer at Oak Ridge National Laboratory was neck-and-neck with Roadrunner in the performance race, but it requires almost 2.8 times the electric power of Roadrunner. This difference translates into millions of dollars per year in operating costs.

  1. SIMPLEV: A simple electric vehicle simulation program, Version 1.0

    SciTech Connect (OSTI)

    Cole, G.H.

    1991-06-01

    An electric vehicle simulation code which can be used with any IBM compatible personal computer was written. This general purpose simulation program is useful for performing parametric studies of electric vehicle performance on user input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC. This report serves as a user's manual and documents the mathematical relationships used in the simulation.
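    The parametric-study idea is easy to illustrate: integrate road-load power over a user-supplied speed trace. This sketch is not SIMPLEV itself; the vehicle parameters (mass, drag, rolling resistance) and the constant driveline efficiency are illustrative assumptions:

```python
# Illustrative road-load energy integration over a driving cycle (not SIMPLEV).
import numpy as np

mass, Cd, A, Crr = 1400.0, 0.30, 2.0, 0.010   # kg, drag coeff., frontal area m^2, rolling
rho, g, eta = 1.2, 9.81, 0.85                 # air density, gravity, driveline efficiency

t = np.arange(0.0, 61.0, 1.0)                 # 60 s cycle sampled at 1 Hz
v = np.clip(0.5 * t, 0.0, 15.0)               # accelerate to 15 m/s, then cruise
a = np.gradient(v, t)

# tractive force: inertia + aerodynamic drag + rolling resistance (when moving)
F = mass * a + 0.5 * rho * Cd * A * v**2 + Crr * mass * g * (v > 0)
P_batt = np.maximum(F * v, 0.0) / eta         # battery power; regeneration ignored here
E_Wh = P_batt.sum() * 1.0 / 3600.0            # 1 s samples -> watt-hours
```

Sweeping `mass` or `Cd` over a range and re-running this loop is exactly the kind of parametric study the abstract describes, with the battery and driveline models reduced to a single efficiency.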

  2. Thermal Hydraulic Computer Code System.

    Energy Science and Technology Software Center (OSTI)

    1999-07-16

    Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

  3. Waterflooding in a system of horizontal wells

    SciTech Connect (OSTI)

    Bedrikovetsky, P.G.; Magarshak, T.O.; Shapiro, A.A.

    1995-10-01

    An approximate analytical method for the simulation of waterflooding in a system of horizontal wells is developed. The method is based on an advanced stream-line concept. The essence of this new method is the exact solution for the 3D two-phase flow problem in the system of coordinates linked with the stream lines under the only assumption of the immobility of stream lines. Software based on this approach was developed for IBM-compatible PCs. It allows multivariant comparative studies of immiscible displacement in systems of horizontal, vertical and slant wells. The simulator has been used in order to optimize geometrical parameters of a regular well system and to predict recovery in conditions of the Prirazlomnoye offshore oil field.

  4. User's guide to a data base of current environmental monitoring projects in the US-Canadian transboundary region

    SciTech Connect (OSTI)

    Ballinger, M.Y.; Defferding, J.; Chapman, E.G.; Bettinson, M.D.; Glantz, C.S.

    1987-11-01

    This document describes how to use a data base of current transboundary region environmental monitoring projects. The data base was prepared from data provided by Glantz et al. (1986) and Concord Scientific Corporation (1985), and contains information on 226 projects with monitoring stations located within 400 km (250 mi) of the US-Canadian border. The data base is designed for use with the dBASE III PLUS data management system on IBM-compatible personal computers. Data-base searches are best accomplished using an accompanying command file called RETRIEVE or the dBASE command LIST. The user must carefully select the substrings on which the search is to be based. Example search requests and subsequent output are presented to illustrate substring selections and applications of the data base. 4 refs., 15 figs., 4 tabs.

  5. LAMMPS strong scaling performance optimization on Blue Gene/Q

    SciTech Connect (OSTI)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.
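    The long-range PPPM step the abstract optimizes amounts to spreading charges onto a mesh, solving Poisson's equation in Fourier space, and transforming back; the 3D FFT inside it is the communication-heavy part that was re-engineered to use MPI collectives. A serial numpy sketch of the mesh solve (nearest-grid-point charge assignment and a spectral Laplacian; particle counts and grid size are arbitrary):

```python
# Periodic mesh Poisson solve, the core of the PPPM long-range step (serial sketch).
import numpy as np

n, L = 32, 10.0                                # mesh points per side, box length
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, L, size=(20, 3))        # particle positions
q = rng.choice([-1.0, 1.0], size=20)
q -= q.mean()                                  # enforce charge neutrality

rho = np.zeros((n, n, n))
idx = (pos / L * n).astype(int) % n            # nearest-grid-point charge assignment
np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), q)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                              # avoid divide-by-zero at k = 0

phi_k = np.fft.fftn(rho) / k2                  # spectral solve of -laplacian(phi) = rho
phi_k[0, 0, 0] = 0.0                           # drop the mean (neutral system)
phi = np.real(np.fft.ifftn(phi_k))
```

In LAMMPS the production version interpolates charges with higher-order stencils and decomposes these FFTs across ranks; that parallel transpose is where the collective-communication rework paid off.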

  6. 17th Edition of TOP500 List of World's Fastest Supercomputers Released

    SciTech Connect (OSTI)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.; Simon,Horst D.

    2001-06-21

    17th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 17th edition of the TOP500 list of the world's fastest supercomputers was released today (June 21). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 40 percent in terms of installed systems and 43 percent in terms of total performance of all the installed systems. In second place in terms of installed systems is Sun Microsystems with 16 percent, while Cray Inc. retained second place in terms of performance (13 percent). SGI Inc. was third both with respect to systems, with 63 (12.6 percent), and with respect to performance (10.2 percent).

  7. Application of bar codes to the automation of analytical sample data collection

    SciTech Connect (OSTI)

    Jurgensen, H A

    1986-01-01

    The Health Protection Department at the Savannah River Plant collects 500 urine samples per day for tritium analyses. Prior to automation, all sample information was compiled manually. Bar code technology was chosen for automating this program because it provides a more accurate, efficient, and inexpensive method for data entry. The system has three major functions: sample labeling is accomplished at remote bar code label stations, each composed of an Intermec 8220 (Intermec Corp.) interfaced to an IBM-PC; data collection is done on a central VAX 11/730 (Digital Equipment Corp.), where bar code readers are used to log in samples to be analyzed on liquid scintillation counters and the VAX 11/730 processes the data and generates reports; and data storage is on the VAX 11/730, backed up on the plant's central computer. A brief description of several other bar code applications at the Savannah River Plant is also presented.

  8. HANSF 1.3 Users Manual FAI/98-40-R2 Hanford Spent Nuclear Fuel (SNF) Safety Analysis Model [SEC 1 and 2

    SciTech Connect (OSTI)

    DUNCAN, D.R.

    1999-10-07

    The HANSF analysis tool is an integrated model considering phenomena inside a multi-canister overpack (MCO) spent nuclear fuel container such as fuel oxidation, convective and radiative heat transfer, and the potential for fission product release. This manual reflects HANSF version 1.3.2, a revised version of 1.3.1. HANSF 1.3.2 was written to correct minor errors and to allow modeling of condensate flow on the MCO inner surface. HANSF 1.3.2 is intended for use on personal computers such as IBM-compatible machines with Intel processors running under Lahey TI or Digital Visual FORTRAN, Version 6.0, but this does not preclude operation in other environments.

  9. Automated system for handling tritiated mixed waste

    SciTech Connect (OSTI)

    Dennison, D.K.; Merrill, R.D.; Reitz, T.C.

    1995-03-01

    Lawrence Livermore National Laboratory (LLNL) is developing a semi-automated system for handling, characterizing, processing, sorting, and repackaging hazardous wastes containing tritium. The system combines an IBM-developed gantry robot with a special glove box enclosure designed to protect operators and minimize the potential release of tritium to the atmosphere. All hazardous waste handling and processing will be performed remotely, using the robot in a teleoperational mode for one-of-a-kind functions and in an autonomous mode for repetitive operations. Initially, this system will be used in conjunction with a portable gas system designed to capture any gaseous-phase tritium released into the glove box. This paper presents the objectives of this development program, provides background related to LLNL's robotics and waste handling program, describes the major system components, outlines system operation, and discusses current status and plans.

  10. Testing of the Eberline PCM-2

    SciTech Connect (OSTI)

    Howe, K.L.

    1994-12-23

    The PCM-2 manufactured by Eberline Instruments is a whole body monitor that detects both alpha and beta contamination. The PCM-2 uses an IBM compatible personal computer for all software functions. The PCM-2 has 34 large area detectors which can cover approximately 40% of the body at a time. This requires two counting cycles to cover approximately 80% of the body. With the normal background seen at Rocky Flats, each count takes approximately 15--20 seconds. There are a number of beta and gamma whole body monitors available from different manufacturers, but an alpha whole body monitor is a rarity. Because of the need for alpha whole body monitors at the Rocky Flats Environmental Technology Site, it was decided to do thorough testing on the PCM-2. A three-month test was run in a uranium building and a three-month test in a plutonium building to verify the alpha capabilities of the PCM-2.

  11. Integrated Air Pollution Control System (IAPCS), Executable Model and Source Model (version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.

  12. The NUCLARR databank: Human reliability and hardware failure data for the nuclear power industry

    SciTech Connect (OSTI)

    Reece, W.J.

    1993-05-01

    Under the sponsorship of the US Nuclear Regulatory Commission (NRC), the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) was developed to provide human reliability and hardware failure data to analysts in the nuclear power industry. This IBM-compatible databank is contained on a set of floppy diskettes which include data files and a menu-driven system for locating, reviewing, sorting, and retrieving the data. NUCLARR contains over 2500 individual data records, drawn from more than 60 sources. The system is upgraded annually, to include additional human error and hardware component failure data and programming enhancements (i.e., increased user-friendliness). NUCLARR is available from the NRC through project staff at the INEL.

  13. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Programmer's guide

    SciTech Connect (OSTI)

    Call, O. J.; Jacobson, J. A.

    1988-09-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability and hardware component failure data. The NUCLARR system software resides on an IBM (or compatible) personal micro-computer and can be used to furnish data inputs for both human and hardware reliability analysis in support of a variety of risk assessment activities. The NUCLARR system is documented in a five-volume series of reports. Volume 2 of this series is the Programmer's Guide for maintaining the NUCLARR system software. This Programmer's Guide provides, for the software engineer, an orientation to the software elements involved, discusses maintenance methods, and presents useful aids and examples. 4 refs., 75 figs., 1 tab.

  14. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-01-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  15. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-05-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  16. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    SciTech Connect (OSTI)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining performance in the evaluation of two-electron repulsion integrals and in constructing the Fock matrix is of considerable importance to the computational chemistry community. Because of the numerical complexity of these methods, improving their performance across a variety of leading supercomputing platforms is an increasing challenge, given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, as well as the Intel Xeon Phi. Our optimization schemes leverage key architectural features including vectorization and simultaneous multithreading, resulting in speedups of up to 2.5x compared with the original implementation.
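    The Fock build the paper tunes is, in closed-shell form, F = h + 2J − K, with the Coulomb and exchange matrices contracted from the four-index repulsion integrals. A toy dense-tensor version (random symmetric integrals standing in for real ones) shows the contraction pattern, though production codes like NWChem exploit integral screening and never materialize the full tensor:

```python
# Closed-shell Fock matrix build from two-electron integrals (toy contraction).
import numpy as np

n = 6                                      # basis size (arbitrary)
rng = np.random.default_rng(3)

h = rng.standard_normal((n, n))
h = 0.5 * (h + h.T)                        # symmetric core Hamiltonian

eri = rng.standard_normal((n, n, n, n))    # stand-in for real (pq|rs) integrals
eri = eri + eri.transpose(1, 0, 2, 3)      # impose the permutation symmetries
eri = eri + eri.transpose(0, 1, 3, 2)      # a real integral tensor would have
eri = eri + eri.transpose(2, 3, 0, 1)

D = rng.standard_normal((n, n))
D = D @ D.T                                # symmetric positive density matrix

J = np.einsum("pqrs,rs->pq", eri, D)       # Coulomb contraction
K = np.einsum("prqs,rs->pq", eri, D)       # exchange contraction
F = h + 2.0 * J - K                        # closed-shell Fock matrix
```

The two `einsum` lines are the O(n^4) kernel per density update; the architectural tuning in the paper targets exactly this contraction and the integral evaluation feeding it.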

  17. Integrated Air Pollution Control System (IAPCS), Executable Model (Version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.

  18. Development of a discrete ordinates code system for unstructured meshes of tetrahedral cells, with serial and parallel implementations

    SciTech Connect (OSTI)

    Miller, R.L.

    1998-11-01

    A numerically stable, accurate, and robust form of the exponential characteristic (EC) method, used to solve the time-independent linearized Boltzmann Transport Equation, is derived using direct affine coordinate transformations on unstructured meshes of tetrahedra. This quadrature, as well as the linear characteristic (LC) spatial quadrature, is implemented in the transport code, called TETRAN. This code solves multi-group neutral particle transport problems with anisotropic scattering and was parallelized using High Performance Fortran and angular domain decomposition. A new, parallel algorithm for updating the scattering source is introduced. The EC source and inflow flux coefficients are efficiently evaluated using Broyden's rootsolver, started with special approximations developed here. TETRAN showed robustness, stability and accuracy on a variety of challenging test problems. Parallel speed-up was observed as the number of processors was increased using an IBM SP computer system.
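    The Broyden rootsolver mentioned for the EC coefficients can be illustrated on a small nonlinear system. This is the generic "good Broyden" rank-one update started from a finite-difference Jacobian, not TETRAN's specialized starting approximations; the test system is invented:

```python
# Broyden's "good" method: secant-style quasi-Newton root finding (generic sketch).
import numpy as np

def G(x):
    # small nonlinear test system with roots at (1, 1) and (-1, -1)
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     x[0] - x[1]])

x = np.array([2.0, 0.5])
eps = 1e-6                                   # finite-difference initial Jacobian
B = np.column_stack([(G(x + eps * e) - G(x)) / eps for e in np.eye(2)])

gx = G(x)
for _ in range(50):
    if np.linalg.norm(gx) < 1e-12:
        break
    dx = -np.linalg.solve(B, gx)             # quasi-Newton step
    x_new = x + dx
    g_new = G(x_new)
    # rank-one "good Broyden" update of the Jacobian approximation
    B += np.outer((g_new - gx) - B @ dx, dx) / (dx @ dx)
    x, gx = x_new, g_new
```

The appeal in a transport sweep is the same as here: after the first step, each iteration costs one residual evaluation and one rank-one update, with no further Jacobian evaluations.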

  19. (Sparsity in large scale scientific computation)

    SciTech Connect (OSTI)

    Ng, E.G.

    1990-08-20

    The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was on sparsity in large scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues having direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.

  20. ALCF Future Systems Tim Williams, Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Future Systems Tim Williams, Argonne Leadership Computing Facility DOE Exascale Requirements Review: High Energy Physics June 11, 2015 Production Systems (ALCF-2) Mira - IBM Blue Gene/Q • 49,152 nodes - PowerPC A2 cpu, 16 cores, 4 HW threads/core - 16 GB RAM • Aggregate - 768 TB RAM, 768K cores - Peak 10 PetaFLOPS • 5D torus interconnect Cooley - Viz/Analysis cluster • 126 nodes: - Two 2.4 GHz Intel Haswell 6-core - 384 GB RAM - NVIDIA Tesla K80 (two

  1. A mobile computed tomographic unit for inspecting reinforced concrete columns

    SciTech Connect (OSTI)

    Sumitra, T.; Srisatit, S.; Pattarasumunt, A.

    1994-12-31

    A mobile computed tomographic unit applicable in the inspection of reinforced concrete columns was designed, constructed and tested. A CT image reconstruction programme written in Quick Basic was first developed to be used on an IBM PC/AT microcomputer. It provided user-friendly menus for data processing and displaying the CT image. The prototype of a gamma-ray scanning system using a 1.11 GBq Cs-137 source and a NaI(Tl) scintillation detector was also designed and constructed. The system was a microcomputer controlled, single-beam rotate-translate scanner used for collecting transmitted gamma-ray data in different angles. The CT unit was finally tested with a standard column and a column of an existing building. The cross sectional images of the columns could be clearly seen. The positions and sizes of the reinforced bars could be estimated.

  2. Challenges of Algebraic Multigrid across Multicore Architectures

    SciTech Connect (OSTI)

    Baker, A H; Gamblin, T; Schulz, M; Yang, U M

    2010-04-12

    Algebraic multigrid (AMG) is a popular solver for large-scale scientific computing and an essential component of many simulation codes. AMG has been shown to be extremely efficient on distributed-memory architectures. However, when executed on modern multicore architectures, we face new challenges that can significantly deteriorate AMG's performance. We examine its performance and scalability on three disparate multicore architectures: a cluster with four AMD Opteron Quad-core processors per node (Hera), a Cray XT5 with two AMD Opteron Hex-core processors per node (Jaguar), and an IBM BlueGene/P system with a single Quad-core processor (Intrepid). We discuss our experiences on these platforms and present results using both an MPI-only and a hybrid MPI/OpenMP model. We also discuss a set of techniques that helped to overcome the associated problems, including thread and process pinning and correct memory associations.
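    The cycle structure whose per-node performance the paper studies is the same in algebraic and geometric multigrid: smooth, restrict the residual, correct from the coarse grid, smooth again. A serial geometric two-grid cycle for the 1D Poisson problem serves as a stand-in for what AMG does with algebraically constructed coarse grids:

```python
# Two-grid cycle for 1D Poisson (geometric stand-in for an AMG V-cycle).
import numpy as np

n = 63                                           # fine-grid interior points
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2

def jacobi(A, x, b, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother: damps the oscillatory error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / d
    return x

nc = (n - 1) // 2                                # coarse interior points
R = np.zeros((nc, n))                            # full-weighting restriction
for i in range(nc):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
P = 2.0 * R.T                                    # linear interpolation
Ac = R @ A @ P                                   # Galerkin coarse-grid operator

b = np.ones(n)
x = np.zeros(n)
for _ in range(5):                               # five two-grid cycles
    x = jacobi(A, x, b, sweeps=3)                # pre-smooth
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, R @ r)       # coarse-grid correction
    x = jacobi(A, x, b, sweeps=3)                # post-smooth

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

On multicore nodes the smoother and the sparse residual computation are the bandwidth-bound kernels; the pinning and memory-association techniques in the paper target exactly those loops.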

  3. MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU

    SciTech Connect (OSTI)

    BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.

    2007-01-08

    The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implementation on an accelerated co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problems of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
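    The kernel itself is compact: correlate the data against a known template, and peaks in the score mark detections. A one-dimensional numpy version conveys what the FPGA/Cell/GPU implementations are accelerating; the signal length, template, and SNR here are arbitrary stand-ins for hyperspectral signatures:

```python
# Matched filtering: cross-correlate noisy data with a known template.
import numpy as np

rng = np.random.default_rng(4)
n, m, true_pos = 512, 32, 200                  # data length, template length, target

template = rng.choice([-1.0, 1.0], size=m)     # known signature (pseudo-random)
signal = rng.normal(0.0, 1.0, n)               # unit-variance background noise
signal[true_pos:true_pos + m] += 5.0 * template  # buried target

# the matched filter maximizes SNR for a known template in white noise;
# np.correlate slides the template over the data without time-reversal
score = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(score))
```

The per-sample cost is one multiply-accumulate per template tap, which is why the kernel maps so naturally onto the accelerators compared in the paper.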

  4. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    SciTech Connect (OSTI)

    Musick, R.; Critchlow, T.; Ganesh, M.; Fidelis, Z.; Zemla, A.; Slezak, T.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems, and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
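    The wrapper/mediator layering described above is easy to caricature in a few classes: each wrapper translates one source's native format into a shared schema, and the mediator presents the union as a single virtual store. All class names, field names, and sample data below are invented for illustration:

```python
# Wrapper/mediator sketch: two differently-shaped sources, one virtual schema.
# All names are illustrative; real warehouses add query planning, caching, etc.

class CsvWrapper:
    """Wraps a source whose rows are 'name,value' strings."""
    def __init__(self, rows):
        self.rows = rows
    def records(self):
        for row in self.rows:
            name, value = row.split(",")
            yield {"id": name.strip(), "measurement": float(value)}

class DictWrapper:
    """Wraps a source that already hands out dicts, but under different keys."""
    def __init__(self, rows):
        self.rows = rows
    def records(self):
        for row in self.rows:
            yield {"id": row["gene"], "measurement": row["score"]}

class Mediator:
    """Presents all wrapped sources as one coherent record stream."""
    def __init__(self, *wrappers):
        self.wrappers = wrappers
    def query(self, min_measurement=0.0):
        return [r for w in self.wrappers for r in w.records()
                if r["measurement"] >= min_measurement]

warehouse = Mediator(
    CsvWrapper(["p53, 0.9", "BRCA1, 0.2"]),
    DictWrapper([{"gene": "HOXA1", "score": 0.7}]),
)
hits = warehouse.query(min_measurement=0.5)
```

The maintenance problem the abstract describes lives in the wrapper classes: when a scientific source changes its format, its wrapper's `records` method must be rewritten while the mediator and user queries stay fixed.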

  5. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect (OSTI)

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon Cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint - allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the Gyrokinetic Toroidal Code (GTC) micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.
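    The radial decomposition idea can be illustrated with a minimal sketch (not GTC's actual scheme): the radial interval is cut into contiguous subdomains and each particle is owned by the rank holding its radius, so per-rank memory scales with the subdomain rather than the whole device. All names and data layouts below are illustrative.

```python
# Illustrative radial domain decomposition: assign each particle to
# the subdomain (rank) that owns its radial coordinate.

def radial_owner(r, r_min, r_max, n_domains):
    """Return the index of the radial subdomain containing radius r."""
    width = (r_max - r_min) / n_domains
    i = int((r - r_min) / width)
    return min(max(i, 0), n_domains - 1)  # clamp boundary values

def decompose(particles, r_min, r_max, n_domains):
    """Bucket particles by owning subdomain."""
    domains = [[] for _ in range(n_domains)]
    for p in particles:
        domains[radial_owner(p["r"], r_min, r_max, n_domains)].append(p)
    return domains

particles = [{"r": 0.1 + 0.2 * k} for k in range(5)]  # radii 0.1 .. 0.9
parts = decompose(particles, 0.0, 1.0, 4)
print([len(d) for d in parts])  # [1, 1, 2, 1]
```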

  6. Health Physics Positions Data Base: Revision 1

    SciTech Connect (OSTI)

    Kerr, G.D.; Borges, T.; Stafford, R.S.; Lu, P.Y.; Carter, D.

    1994-02-01

    The Health Physics Positions (HPPOS) Data Base of the Nuclear Regulatory Commission (NRC) is a collection of NRC staff positions on a wide range of topics involving radiation protection (health physics). It consists of 328 documents in the form of letters, memoranda, and excerpts from technical reports. The HPPOS Data Base was developed by the NRC Headquarters and Regional Offices to help ensure uniformity in inspections, enforcement, and licensing actions. Staff members of the Oak Ridge National Laboratory (ORNL) have assisted the NRC staff in summarizing the documents during the preparation of this NUREG report. These summaries are also being made available as a "stand-alone" software package for IBM and IBM-compatible personal computers. The software package for this report is called HPPOS Version 2.0. A variety of indexing schemes were used to increase the usefulness of the NUREG report and its associated software. The software package and the summaries in the report are written in the context of the "new" 10 CFR Part 20 (§§ 20.1001-20.2401). The purpose of this NUREG report is to allow interested individuals to familiarize themselves with the contents of the HPPOS Data Base and with the basis of many NRC decisions and regulations. The HPPOS summaries and original documents are intended to serve as a source of information for radiation protection programs at nuclear research and power reactors, nuclear medicine, and other industries that either process or use nuclear materials.

  7. Project Report on DOE Young Investigator Grant (Contract No. DE-FG02-02ER25525) Dynamic Scheduling and Fusion of Irregular Computation (August 15, 2002 to August 14, 2005)

    SciTech Connect (OSTI)

    Chen Ding

    2005-08-16

    enormous number of data elements. To optimize the layout across multiple arrays, we have developed a formal model called reference affinity. We collaborated with the IBM production compiler group and designed an efficient compiler analysis that performs as well as data or code profiling does. Based on these results, the IBM group has filed a patent and is including this technique in their product compiler. A major part of the project is the development of software tools. We have developed web-based visualization for program locality. In addition, we have implemented a prototype of array regrouping in the IBM compiler. The full implementation is expected to come out of IBM in the near future and to benefit scientific applications running on IBM supercomputers. We have also developed a test environment for studying the limit of computation fusion. Finally, our work has directly influenced the design of the Intel Itanium compiler. The project has strengthened the research relations between the PI's group and groups in DOE labs. The PI was an invited speaker at the Center for Applied Scientific Computing Seminar Series at the early stage of the project. The question the audience was most curious about was the limit of computation fusion, which has been studied in depth in this research. In addition, the seminar directly helped a group at Lawrence Livermore to achieve a four-times speedup on an important DOE code. The PI helped to organize a number of high-performance computing forums, including the founding of a workshop on memory system performance (MSP). In the past two years, one fourth of the papers in the workshop came from researchers at Lawrence Livermore, Argonne, Los Alamos, and Lawrence Berkeley national laboratories. The PI lectured frequently on DOE-funded research. In a broader context, high performance computing is central to America's scientific and economic stature in the world,
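    Array regrouping as described above can be illustrated with a toy example; the functions and field names are hypothetical, not the IBM compiler's implementation. Two parallel arrays whose elements are always referenced together (high reference affinity) are interleaved so a single pass touches co-located fields.

```python
# Illustrative array regrouping: interleave parallel arrays that have
# high reference affinity so one cache-line fetch serves both fields.

def regroup(xs, ys):
    """Merge two parallel arrays into one array of records."""
    return [{"x": x, "y": y} for x, y in zip(xs, ys)]

def dot_regrouped(pts):
    # A single linear pass now touches adjacent x and y fields together,
    # instead of striding through two widely separated arrays.
    return sum(p["x"] * p["y"] for p in pts)

pts = regroup([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(dot_regrouped(pts))  # 1*4 + 2*5 + 3*6 = 32.0
```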

  8. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    SciTech Connect (OSTI)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang; Yang, Hui

    2014-09-30

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles by a fraction of the Eulerian grid spacing helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The preexisting code, which had a first-order convergence rate, was updated so that it can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code along with the new formula for the number of Lagrangian markers was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a
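    The classical fourth-order Runge-Kutta scheme credited above with improving the particle-motion update can be written generically as follows; this is a textbook sketch, not the IB-LBM code itself.

```python
# Generic classical RK4 step for dy/dt = f(t, y); in the abstract's
# setting, y would hold a particle's translational/rotational state.

def rk4_step(f, t, y, h):
    """Advance dy/dt = f(t, y) by one step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dy/dt = y, y(0) = 1, so y(1) should be close to e.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)
```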

  9. 2008 ALCF annual report.

    SciTech Connect (OSTI)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. No question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. 
We provided resources for Linear Scaling Divide

  10. HARE: Final Report

    SciTech Connect (OSTI)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a 6-year period under the FAST-OS programs. The first effort, called Right-Weight Kernels (RWK), was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems to determine how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact:
    - Helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well
    - Created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development
    - Created a standard system measurement tool, Fixed Time Quantum or FTQ, which is widely used for measuring operating systems' impact on applications
    - Spurred the use of the 9p protocol in several organizations, including IBM
    - Built software in use at many companies, including IBM, Cray, and Google
    - Spurred the creation of alternative runtimes for use on HPC systems
    - Demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries
    Open source was a key part of this work. The code developed for this project is in wide use and available at many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. The rest of this report is organized as follows: First, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating
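    The Fixed Time Quantum idea mentioned above can be sketched simply (this is an illustration, not the real FTQ tool): count how much work fits into each fixed-length quantum; dips in the per-quantum counts reveal operating-system interference, i.e. noise.

```python
# Minimal FTQ-style noise probe: do trivial work for a fixed quantum
# and record how much completed; variation across quanta is OS noise.

import time

def ftq(quantum_s=0.001, samples=5):
    counts = []
    for _ in range(samples):
        end = time.perf_counter() + quantum_s
        n = 0
        while time.perf_counter() < end:
            n += 1  # the "work" is just incrementing a counter
        counts.append(n)
    return counts

counts = ftq()
print(len(counts), min(counts), max(counts))
```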

  11. Simulating atmosphere flow for wind energy applications with WRF-LES

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-01-14

    Forecasts of available wind energy resources at high spatial resolution enable users to site wind turbines in optimal locations, to forecast available resources for integration into power grids, to schedule maintenance on wind energy facilities, and to define design criteria for next-generation turbines. This array of research needs implies that an appropriate forecasting tool must be able to account for mesoscale processes like frontal passages, surface-atmosphere interactions inducing local-scale circulations, and the microscale effects of atmospheric stability such as breaking Kelvin-Helmholtz billows. This range of scales and processes demands a mesoscale model with large-eddy simulation (LES) capabilities which can also account for varying atmospheric stability. Numerical weather prediction models, such as the Weather Research and Forecasting model (WRF), excel at predicting synoptic and mesoscale phenomena. With grid spacings of less than 1 km (as is often required for wind energy applications), however, the limits of WRF's subfilter scale (SFS) turbulence parameterizations are exposed, and fundamental problems arise, associated with modeling the scales of motion between those which LES can represent and those for which large-scale PBL parameterizations apply. To address these issues, we have implemented significant modifications to the ARW core of the Weather Research and Forecasting model, including the Nonlinear Backscatter model with Anisotropy (NBA) SFS model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We are also modifying WRF's terrain-following coordinate system by implementing an immersed boundary method (IBM) approach to account for the effects of complex terrain. Companion papers presenting idealized simulations with NBA-RSFS-WRF (Mirocha et al.) and IBM-WRF (K. A. Lundquist et al.) are also presented. Observations of flow

  12. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect (OSTI)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
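    The search-based methodology can be illustrated with a toy auto-tuner; the kernel, block sizes, and timing harness below are hypothetical, but the structure (code-generate variants, time each on the target machine, keep the fastest) mirrors the approach described.

```python
# Toy auto-tuner: generate kernel variants parameterized by block
# size, benchmark each on this machine, and select the fastest.

import time

def make_summer(block):
    """Code-generate a summation kernel with a given block size."""
    def kernel(xs):
        total = 0.0
        n = len(xs) - len(xs) % block
        for i in range(0, n, block):
            total += sum(xs[i:i + block])  # blocked inner loop
        total += sum(xs[n:])               # remainder elements
        return total
    return kernel

def autotune(xs, blocks):
    """Return (block, kernel) of the fastest measured variant."""
    best, best_time = None, float("inf")
    for b in blocks:
        k = make_summer(b)
        t0 = time.perf_counter()
        k(xs)
        dt = time.perf_counter() - t0
        if dt < best_time:
            best, best_time = (b, k), dt
    return best

data = [1.0] * 10000
block, kernel = autotune(data, [1, 8, 64])
print(block, kernel(data))
```

    Real auto-tuners such as the one described search far richer spaces (register blocking, prefetch distances, SIMDization), but all variants must compute the same answer, as here.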

  13. Code manual for MACCS2: Volume 1, user's guide

    SciTech Connect (OSTI)

    Chanin, D.I.; Young, M.L.

    1997-03-01

    This report describes the use of the MACCS2 code. The document is primarily a user's guide, though some model description information is included. MACCS2 represents a major enhancement of its predecessor MACCS, the MELCOR Accident Consequence Code System. MACCS, distributed by government code centers since 1990, was developed to evaluate the impacts of severe accidents at nuclear power plants on the surrounding public. The principal phenomena considered are atmospheric transport and deposition under time-variant meteorology, short- and long-term mitigative actions and exposure pathways, deterministic and stochastic health effects, and economic costs. No other U.S. code that is publicly available at present offers all these capabilities. MACCS2 was developed as a general-purpose tool applicable to diverse reactor and nonreactor facilities licensed by the Nuclear Regulatory Commission or operated by the Department of Energy or the Department of Defense. The MACCS2 package includes three primary enhancements: (1) a more flexible emergency-response model, (2) an expanded library of radionuclides, and (3) a semidynamic food-chain model. Other improvements are in the areas of phenomenological modeling and new output options. Initial installation of the code, written in FORTRAN 77, requires a 486 or higher IBM-compatible PC with 8 MB of RAM.

  14. Plant maintenance and plant life extension issue, 2009

    SciTech Connect (OSTI)

    Agnihotri, Newal

    2009-03-15

    The focus of the March-April issue is on plant maintenance and plant life extension. Major articles include the following: Application of modeling and simulation to nuclear power plants, by Berry Gibson, IBM, and Rolf Gibbels, Dassault Systems; Steam generators with tight manufacturing procedures, by Ei Kadokami, Mitsubishi Heavy Industries; SG design based on operational experience and R and D, by Jun Tang, Babcock and Wilcox Canada; Confident to deliver reliable performance, by Bruce Bevilacqua, Westinghouse Nuclear; An evolutionary plant design, by Martin Parece, AREVA NP, Inc.; and, Designed for optimum production, by Danny Roderick, GE Hitachi Nuclear Energy. Industry Innovation articles include: Controlling alloy 600 degradation, by John Wilson, Exelon Nuclear Corporation; Condensate polishing innovation, by Lewis Crone, Dominion Millstone Power Station; Reducing deposits in steam generators, by the Electric Power Research Institute; and, Minimizing Radiological effluent releases, by the Electric Power Research Institute. The plant profile article is titled 2008 - a year of 'firsts' for AmerenUE's Callaway plant, by Rick Eastman, AmerenUE.

  15. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect (OSTI)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within workgroup load balance and minimize memory usage.
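    The workload-proportional replication idea can be sketched as a simple apportionment problem; the function below is an illustration under assumed inputs, not the authors' algorithm: each domain gets at least one rank, and the remaining ranks are shared in proportion to particle workload.

```python
# Illustrative replication-level assignment: ranks per domain roughly
# proportional to that domain's particle workload (largest-remainder
# apportionment of the leftover ranks).

def replication_levels(workloads, total_ranks):
    """Assign at least one rank per domain, extras by workload share."""
    n = len(workloads)
    total = sum(workloads)
    levels = [1] * n
    extra = total_ranks - n
    shares = [w / total * extra for w in workloads]
    for i in range(n):
        levels[i] += int(shares[i])        # integer part of each share
    remainder = total_ranks - sum(levels)
    order = sorted(range(n), key=lambda i: shares[i] - int(shares[i]),
                   reverse=True)
    for i in order[:remainder]:            # largest fractional parts win
        levels[i] += 1
    return levels

print(replication_levels([100, 300, 600], 10))  # [2, 3, 5]
```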

  16. Portable, parallel, reusable Krylov space codes

    SciTech Connect (OSTI)

    Smith, B.; Gropp, W.

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR, and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5, and the IBM SP1.
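    A data-structure-neutral Krylov method can be sketched by writing the algorithm purely against user-supplied callbacks, in the spirit of KSP; this is an illustrative conjugate gradient, not KSP's actual interface.

```python
# Conjugate gradient written only against callbacks: the solver never
# sees the matrix or vector layout, only apply_A, dot, and axpy.

def cg(apply_A, b, x0, dot, axpy, tol=1e-10, max_it=100):
    """Solve A x = b given apply_A(x), dot(u, v), axpy(a, u, v) = a*u + v."""
    x = x0
    r = axpy(-1.0, apply_A(x), b)      # r = b - A x
    p = r
    rs = dot(r, r)
    for _ in range(max_it):
        Ap = apply_A(p)
        alpha = rs / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = axpy(rs_new / rs, p, r)    # p = r + beta * p
        rs = rs_new
    return x

# The application's "data structure" here is a plain list of floats;
# it could equally be any distributed or blocked layout.
A = [[4.0, 1.0], [1.0, 3.0]]
apply_A = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
axpy = lambda a, u, v: [a * ui + vi for ui, vi in zip(u, v)]
x = cg(apply_A, [1.0, 2.0], [0.0, 0.0], dot, axpy)
print(x)  # approximately [1/11, 7/11]
```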

  17. Computer security plan development using an expert system

    SciTech Connect (OSTI)

    Hunteman, W.J. ); Evans, R.; Brownstein, M.; Chapman, L. )

    1990-01-01

    The Computer Security Plan Assistant (SPA) is an expert system for reviewing Department of Energy (DOE) Automated Data Processing (ADP) Security Plans. DOE computer security policies require ADP security plans to be periodically reviewed and updated by all DOE sites. SPA is written in XI-Plus, an expert system shell. SPA was developed by BDM International, Inc., under sponsorship by the DOE Center for Computer Security at Los Alamos National Laboratory. SPA runs on an IBM or compatible personal computer. It presents a series of questions about the ADP security plan being reviewed. The SPA user references the ADP Security Plan and answers the questions. The SPA user reviews each section of the security plan, in any order, until all sections have been reviewed. The SPA user can stop the review process after any section and restart later. A Security Plan Review Report is available after the review of each section of the Security Plan. The Security Plan Review Report gives the user a written assessment of the completeness of the ADP Security Plan. SPA is being tested at Los Alamos and will soon be available to the DOE community.
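    The question-driven review flow described above can be sketched as a toy rule set; the questions, keys, and report wording are invented for illustration and are not SPA's actual knowledge base.

```python
# Toy rule-based plan review: each rule inspects the user's answers
# and contributes a finding to the review report.

rules = [
    ("Does the plan name a responsible security officer?", "officer"),
    ("Does the plan describe backup procedures?", "backups"),
]

def review(answers):
    """Return a list of findings for the answered checklist."""
    findings = []
    for question, key in rules:
        if not answers.get(key, False):
            findings.append("Incomplete: " + question)
    return findings or ["Plan sections reviewed: no gaps found."]

print(review({"officer": True, "backups": False}))
```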

  18. Users manual for the Chameleon parallel programming tools

    SciTech Connect (OSTI)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.
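    The uniform-access idea can be sketched as a thin dispatch layer; the class and method names below are entirely hypothetical (Chameleon's real interface is a C/Fortran macro layer), but they show one send/receive interface forwarding to whichever underlying transport is selected.

```python
# Toy portability layer: one interface, pluggable backends.

class LoopbackTransport:
    """Stand-in for a real transport such as p4, PICL, or PVM."""
    def __init__(self):
        self.queue = []
    def send(self, dest, msg):
        self.queue.append((dest, msg))
    def recv(self):
        return self.queue.pop(0)

class PortabilityLayer:
    """Uniform send/recv that forwards to the chosen backend."""
    def __init__(self, backend):
        self.backend = backend
    def send(self, dest, msg):
        self.backend.send(dest, msg)
    def recv(self):
        return self.backend.recv()

comm = PortabilityLayer(LoopbackTransport())
comm.send(1, "hello")
print(comm.recv())  # (1, 'hello')
```

    Application code written against the uniform layer is unchanged when the backend is swapped, which is the portability property the abstract describes.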

  19. Global-Address Space Networking (GASNet) Library

    Energy Science and Technology Software Center (OSTI)

    2011-04-06

    GASNet (Global-Address Space Networking) is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages such as UPC and Titanium. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet is designed specifically to support high-performance, portable implementations of global address space languages on modern high-end communication networks. The interface provides the flexibility and extensibility required to express a wide variety of communication patterns without sacrificing performance by imposing large computational overheads in the interface. The design of the GASNet interface is partitioned into two layers to maximize porting ease without sacrificing performance: the lower level is a narrow but very general interface called the GASNet core API - the design is based heavily on Active Messages, and is implemented directly on top of each individual network architecture. The upper level is a wider and more expressive interface called the GASNet extended API, which provides high-level operations such as remote memory access and various collective operations. This release implements GASNet over MPI, the Quadrics "elan" API, the Myrinet "GM" API and the "LAPI" interface to the IBM SP switch. A template is provided for adding support for additional network interfaces.
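    The Active Messages style underlying the core API can be sketched as follows; the Endpoint class and handler-index convention are a hypothetical illustration, not GASNet's real interface: the sender names a registered handler index, and the receiver dispatches to that handler when the message arrives.

```python
# Toy active-message core: messages carry a handler index plus
# arguments; the receiving endpoint dispatches on poll().

class Endpoint:
    def __init__(self):
        self.handlers = {}
        self.inbox = []
    def register(self, idx, fn):
        self.handlers[idx] = fn
    def am_request(self, dest, idx, *args):
        dest.inbox.append((idx, args))   # stand-in for a network send
    def poll(self):
        while self.inbox:
            idx, args = self.inbox.pop(0)
            self.handlers[idx](*args)    # run the named handler

node = Endpoint()
results = []
node.register(0, lambda x, y: results.append(x + y))
peer = Endpoint()
peer.am_request(node, 0, 2, 3)
node.poll()
print(results)  # [5]
```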

  20. MPH: A Library for Distributed Multi-Component Environment

    Energy Science and Technology Software Center (OSTI)

    2001-05-01

    A growing trend in developing large and complex applications on today's teraflops computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. We have developed MPH, a multi-component handshaking library that allows component models to recognize and talk to each other in a convenient and consistent way, and thus to run multi-component, multi-executable applications effectively on distributed memory architectures. MPH provides the following capabilities: component name registration, resource allocation, inter-component communication, inquiry on the multi-component environment, and standard in/out redirect. It supports the following four integration mechanisms: Multi-Component Single-Executable (MCSE); Single-Component Multi-Executable (SCME); Multi-Component Multi-Executable (MCME); Multi-Instance Multi-Executable (MIME). MPH currently works on the IBM SP, SGI Origin, Compaq AlphaSC, Cray T3E, and PC clusters. It is being adopted in NCAR's CCSM and Colorado State University's icosahedral grid coupled model. A joint communicator between any two components can be created. MPI communication between local processors and remote processors is invoked through component names and the local id. More functions are available to inquire about the global id, local id, number of executables, etc.
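    Component name registration and global/local id inquiry of the kind described can be sketched with a toy registry; the class and method names are invented for illustration and are not MPH's API.

```python
# Toy component registry: each rank registers under a component name;
# any component can then look up peer ranks or map a local id to a
# global rank id.

class Registry:
    def __init__(self):
        self.components = {}
    def register(self, name, rank):
        self.components.setdefault(name, []).append(rank)
    def ranks_of(self, name):
        return self.components[name]
    def global_id(self, name, local_id):
        return self.ranks_of(name)[local_id]

reg = Registry()
for rank, comp in enumerate(["ocean", "ocean", "atmosphere", "coupler"]):
    reg.register(comp, rank)
print(reg.ranks_of("ocean"), reg.global_id("atmosphere", 0))  # [0, 1] 2
```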

  1. Status of the MORSE multigroup Monte Carlo radiation transport code

    SciTech Connect (OSTI)

    Emmett, M.B.

    1993-06-01

    There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

  2. Application of expert systems for diagnosing equipment failures at central energy plants. Final report

    SciTech Connect (OSTI)

    Moshage, R.; Kantamneni, M.; Schanche, G.; Metea, M.; Blazek, C.

    1993-12-01

    The growing cost of operating and maintaining its central heating plants (CHPs) has forced the Army to seek alternatives to traditional methods of running these facilities. Computer technology offers the potential to automate and assist in many tasks, such as in the diagnosis of equipment malfunctions and failures in Army CHPs. An automated diagnostic tool for heating plant equipment could lower the cost of human labor by freeing personnel for higher priority work. Automatic diagnosis of problems could also reduce downtime for repair, promote thermal efficiency, and improve on-line reliability. Researchers at the U.S. Army Construction Engineering Research Laboratories (USACERL) investigated the application of artificial intelligence (AI) using knowledge-based expert systems to the monitoring and diagnosing of CHP boiler operations. A prototype system (MAD) was developed to Monitor And Diagnose boiler failure or identify inefficient operation, and recommend action to optimize combustion efficiency. The system includes a knowledge base containing rules for diagnosing the condition of major package boiler components. Minimum system requirements for MAD are an IBM-compatible AT-class personal computer (PC) with 640K base memory and 1 megabyte extended memory, 1.5 megabytes of free hard drive space, a color graphics adaptor (CGA), and DOS 3.0 (or higher).

  3. Coal Preparation Plant Simulation

    Energy Science and Technology Software Center (OSTI)

    1992-02-25

    COALPREP assesses the degree of cleaning obtained with different coal feeds for a given plant configuration and mode of operation. It allows the user to simulate coal preparation plants to determine an optimum plant configuration for a given degree of cleaning. The user can compare the performance of alternative plant configurations as well as determine the impact of various modes of operation for a proposed configuration. The devices that can be modelled include froth flotation devices, washers, dewatering equipment, thermal dryers, rotary breakers, roll crushers, classifiers, screens, blenders and splitters, and gravity thickeners. The user must specify the plant configuration and operating conditions and a description of the coal feed. COALPREP then determines the flowrates within the plant and a description of each flow stream (i.e. the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, BTU content, recoveries, and specific gravity of separation). COALPREP also includes a capability for calculating the cleaning cost per ton of coal. The IBM PC version contains two auxiliary programs, DATAPREP and FORLIST. DATAPREP is an interactive preprocessor for creating and editing COALPREP input data. FORLIST converts carriage-control characters in FORTRAN output data to ASCII line-feed (X'0A') characters.
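    The stream bookkeeping a simulator like this performs can be illustrated with two toy unit models; the stream fields and functions are hypothetical, not COALPREP's, but they show the mass-balance idea: a splitter conserves tonnage, and a blender mass-weights stream quality.

```python
# Toy plant-stream units: a splitter and a blender that conserve mass.

def splitter(feed, fraction):
    """Split a stream into two; tph = tons per hour, quality unchanged."""
    a = {"tph": feed["tph"] * fraction, "ash_pct": feed["ash_pct"]}
    b = {"tph": feed["tph"] * (1 - fraction), "ash_pct": feed["ash_pct"]}
    return a, b

def blender(s1, s2):
    """Blend two streams; percent ash is the mass-weighted average."""
    tph = s1["tph"] + s2["tph"]
    ash = (s1["tph"] * s1["ash_pct"] + s2["tph"] * s2["ash_pct"]) / tph
    return {"tph": tph, "ash_pct": ash}

feed = {"tph": 100.0, "ash_pct": 12.0}
a, b = splitter(feed, 0.25)
print(blender(a, b))  # re-blending recovers the feed exactly
```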

  4. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
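
    The layer-by-layer ghost selection can be sketched serially: starting from a rank's owned entities, each pass over the chosen adjacency adds the off-rank entities one more hop away. The real algorithm does this with neighborhood communication across processes; this single-process sketch (with an illustrative line mesh) only shows which remote entities become ghosts.

    ```python
    # Serial sketch of n-layer ghost selection on a partitioned mesh graph.
    # `adj` is the selected adjacency, `part` maps each entity to its owning
    # rank. Names and the toy mesh are illustrative.

    def ghost_layers(adj, part, rank, n):
        """Off-rank entities within n adjacency hops of rank's owned entities."""
        owned = {e for e, p in part.items() if p == rank}
        ghosts, frontier = set(), set(owned)
        for _ in range(n):
            nxt = {b for a in frontier for b in adj[a]} - owned - ghosts
            ghosts |= nxt
            frontier = nxt        # next layer grows from the newest ghosts
        return ghosts

    # A 6-element line mesh 0-1-2-3-4-5 split between two ranks.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
    one_layer = ghost_layers(adj, part, 0, 1)   # {3}
    two_layers = ghost_layers(adj, part, 0, 2)  # {3, 4}
    ```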

  5. Petascale algorithms for reactor hydrodynamics.

    SciTech Connect (OSTI)

    Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.

    2008-01-01

    We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.

  6. One-Dimensional Heat Conduction

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    ICARUS-LLNL was developed to solve one-dimensional planar, cylindrical, or spherical conduction heat transfer problems. The IBM PC version is a family of programs including ICARUSB, an interactive BASIC heat conduction program; ICARUSF, a FORTRAN heat conduction program; PREICAR, a BASIC preprocessor for ICARUSF; and PLOTIC and CPLOTIC, interpretive BASIC and compiler BASIC plot postprocessor programs. Both ICARUSB and ICARUSF account for multiple material regions and complex boundary conditions, such as convection or radiation. In addition, ICARUSF accounts for temperature-dependent material properties and time or temperature-dependent boundary conditions. PREICAR is a user-friendly preprocessor used to generate or modify ICARUSF input data. PLOTIC and CPLOTIC generate plots of the temperature or heat flux profile at specified times, plots of the variation of temperature or heat flux with time at selected nodes, or plots of the solution grid. First developed in 1974 to allow easy modeling of complex one-dimensional systems, its original application was in the nuclear explosive testing program. Since then it has undergone extensive revision and been applied to problems dealing with laser fusion target fabrication, heat loads on underground tests, magnetic fusion switching tube anodes, and nuclear waste isolation canisters.
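
    The planar case of the problem class these codes solve can be sketched with the classic explicit finite-difference (FTCS) update, T_i ← T_i + r(T_{i-1} - 2T_i + T_{i+1}) with r = αΔt/Δx² kept below 0.5 for stability. Fixed-temperature ends here stand in for the richer convection/radiation boundary conditions ICARUSF supports; the grid and values are illustrative.

    ```python
    # Minimal explicit 1-D conduction sketch (not ICARUS source code):
    # interior nodes update by FTCS; boundary nodes are held fixed.

    def step(T, r):
        """One FTCS time step; r = alpha*dt/dx**2 must be < 0.5."""
        new = T[:]
        for i in range(1, len(T) - 1):
            new[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
        return new

    # A bar initially at 0 with both ends held at 100 relaxes toward 100.
    T = [100.0] + [0.0] * 9 + [100.0]
    for _ in range(2000):
        T = step(T, 0.4)
    ```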

  7. A spreadsheet-coupled SOLGAS: A computerized thermodynamic equilibrium calculation tool. Revision 1

    SciTech Connect (OSTI)

    Trowbridge, L.D.; Leitnaker, J.M.

    1995-07-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several "bells and whistles" have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised spreadsheet-based format for entering data, including non-ideal binary and ternary mixtures, simplifies and reduces chances for error. Calculational errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed on line. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384K of low RAM, are available from the authors. This user manual contains appendices with examples of the use of SOLGAS. These range from elementary examples, such as the relationships among water, ice, and water vapor, to more complex systems: phase diagram calculation of the UF₄ and UF₆ system; burning UF₄ in fluorine; thermodynamic calculation of the Cl-F-O-H system; equilibria calculations in the CCl₄-CH₃OH system; and limitations applicable to aqueous solutions. An appendix also contains the source code.
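
    The underlying computation is minimization of total Gibbs energy over composition. A toy version, with illustrative energies rather than SOLGAS data: for an ideal A ⇌ B mixture, G(x) = Σ xᵢ(gᵢ + RT ln xᵢ), and the minimum reproduces the analytic equilibrium ratio x_B/x_A = exp(−(g_B − g_A)/RT). The brute-force grid scan stands in for SOLGAS's actual solver.

    ```python
    # Toy Gibbs-energy minimization for an ideal two-species mixture.
    # gA, gB, T are illustrative values, not SOLGAS thermodynamic data.
    import math

    R, T = 8.314, 1000.0            # J/(mol K), K
    gA, gB = 0.0, -5000.0           # molar Gibbs energies, J/mol

    def G(xB):
        """Total Gibbs energy per mole of mixture at mole fraction xB."""
        xA = 1.0 - xB
        return xA * (gA + R*T*math.log(xA)) + xB * (gB + R*T*math.log(xB))

    # Scan the composition grid for the minimum (stand-in for a real solver).
    xB = min((i / 100000 for i in range(1, 100000)), key=G)
    K = math.exp(-(gB - gA) / (R * T))   # analytic equilibrium ratio
    ```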

  8. Prototype prosperity-diversity game for the Laboratory Development Division of Sandia National Laboratories

    SciTech Connect (OSTI)

    VanDevender, P.; Berman, M.; Savage, K.

    1996-02-01

    The Prosperity Game conducted for the Laboratory Development Division of Sandia National Laboratories on May 24-25, 1995, focused on the individual and organizational autonomy plaguing the ability of the Department of Energy (DOE), Congress, and the Laboratories to manage the wrenching change of declining budgets. Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Each Prosperity Game is unique in that both the game format and the player contributions vary from game to game. This particular Prosperity Game was played by volunteers from Sandia National Laboratories, Eastman Kodak, IBM, and AT&T. Since the participants fully control the content of the games, the specific outcomes will be different when the team for each laboratory, Congress, DOE, and the Laboratory Operating Board (now Laboratory Operations Board) is composed of executives from those respective organizations. Nevertheless, the strategies and implementing agreements suggest that the Prosperity Games stimulate cooperative behaviors and may permit the executives of the institutions to safely explore the consequences of acting in concert.

  9. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F.

    1993-05-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  10. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F. (Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  11. Modular System for Neutronics Calculations of Fission Reactors, Fusion Blankets, and Other Systems.

    Energy Science and Technology Software Center (OSTI)

    1999-07-23

    AUS is a neutronics code system which may be used for calculations of a wide range of fission reactors, fusion blankets and other neutron applications. The present version, AUS98, has a nuclear cross section library based on ENDF/B-VI and includes modules which provide for reactor lattice calculations, one-dimensional transport calculations, multi-dimensional diffusion calculations, cell and whole reactor burnup calculations, and flexible editing of results. Calculations of multi-region resonance shielding, coupled neutron and photon transport, energy deposition, fission product inventory and neutron diffusion are combined within the one code system. The major changes from the previous release, AUS87, are the inclusion of a cross-section library based on ENDF/B-VI, the addition of the POW3D multi-dimensional diffusion module, the addition of the MICBURN module for controlling whole reactor burnup calculations, and changes to the system as a consequence of moving from IBM mainframe computers to UNIX workstations.

  12. Karlsruhe Database for Radioactive Wastes (KADABRA) - Accounting and Management System for Radioactive Waste Treatment - 12275

    SciTech Connect (OSTI)

    Himmerkus, Felix; Rittmeyer, Cornelia [WAK Rueckbau- und Entsorgungs- GmbH, 76339 Eggenstein-Leopoldshafen (Germany)

    2012-07-01

    The data management system KADABRA was designed according to the purposes of the Central Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on an IBM System z mainframe. The main function of the system is the data management of all processes related to treatment, transfer and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interim products and waste products as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non-destructive assay systems, which describe the chemical and radiological properties of residues, production batches, interim products as well as final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual status of the material as well as radiological data and storage position can be gained immediately on request. A variety of programs accessing the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the recent requirements of the organization. (authors)

  13. Expert systems applied to two problems in nuclear power plants

    SciTech Connect (OSTI)

    Kim, K.Y.

    1988-01-01

    This dissertation describes two prototype expert systems applied to two problems in nuclear power plants. One problem is spare parts inventory control, and the other is radionuclide release from containment during a severe accident. The expert system for spare parts inventory control can handle spare parts requirements not only in corrective, preventive, or predictive maintenance, but also when failure rates of components or parts are updated by new data. Costs and benefits of spare parts inventory acquisition are evaluated with qualitative attributes such as spare part availability to provide the inventory manager with an improved basis for decision making. The expert system is implemented with Intelligence/Compiler on an IBM-AT. The other expert system, for radionuclide release from containment, can estimate the magnitude, type, location, and time of release of radioactive materials from containment during a severe accident nearly on line, based on actual measured physical parameters such as temperature and pressure inside the containment. The expert system has a function to check the validity of sensor data. The expert system is implemented with KEE on a Symbolics LISP machine.
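
    One quantitative fragment of the spare-parts reasoning described above can be sketched directly: if failures over the planning period follow a Poisson distribution with mean rate λ, stock the smallest quantity s whose stockout probability P(demand > s) falls below a target. The distributional assumption and numbers are illustrative, not taken from the dissertation.

    ```python
    # Illustrative stocking-level calculation under a Poisson demand
    # assumption (a common inventory model; not the dissertation's code).
    import math

    def stockout_prob(lam, s):
        """P(Poisson(lam) demand exceeds s units on hand)."""
        cdf = sum(math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(s + 1))
        return 1.0 - cdf

    def stock_level(lam, target=0.05):
        """Smallest stock whose stockout probability is at most `target`."""
        s = 0
        while stockout_prob(lam, s) > target:
            s += 1
        return s

    level = stock_level(2.0, target=0.05)   # 5 units for lam = 2 failures
    ```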

  14. A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2014-01-01

    We present here a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, but the first to use the Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors that can be connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. In addition we also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of both empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  15. EAGLES 1.1: A microcomputer software package for analyzing fuel efficiency of electric and gasoline vehicles

    SciTech Connect (OSTI)

    Marr, W.M.

    1994-05-15

    As part of the U.S. Department of Energy's electric/hybrid vehicle research program, Argonne National Laboratory has developed a computer software package called EAGLES. This paper describes the capability of the software and its many features and potential applications. EAGLES version 1.1 is an interactive microcomputer software package for the analysis of battery performance in electric-vehicle applications, or the estimation of fuel economy for a gasoline vehicle. The principal objective of the electric-vehicle analysis is to enable the prediction of electric-vehicle performance (e.g., vehicle range) on the basis of laboratory test data for batteries. The model provides a second-by-second simulation of battery voltage and current for any specified velocity/time or power/time profile, taking into consideration the effects of battery depth-of-discharge and regenerative braking. Alternatively, the software package can be used to determine the size of the battery needed to satisfy given vehicle mission requirements (e.g., range and driving patterns). For gasoline-vehicle analysis, an empirical model relating fuel economy, vehicle parameters, and driving-cycle characteristics is included in the software package. For both types of vehicles, effects of heating/cooling loads on vehicle performance can be simulated. The software package includes many default data sets for vehicles, driving cycles, and battery technologies. EAGLES 1.1 is written in the FORTRAN language for use on IBM-compatible microcomputers.
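
    The second-by-second bookkeeping described can be reduced to a very small sketch: integrate the energy drawn from the battery over a per-second power profile and stop at the allowed depth of discharge. The constant speed, constant efficiency, and all figures below are assumptions of the sketch, not the EAGLES model (which tracks voltage and current, not just energy).

    ```python
    # Much-simplified range estimate from a per-second power profile
    # (illustrative only; EAGLES models battery voltage/current in detail).

    def simulate_range(capacity_kwh, power_profile_kw, speed_kmh, dod_limit=0.8):
        """Distance (km) covered before reaching the depth-of-discharge limit."""
        usable_kwh = capacity_kwh * dod_limit
        used = dist = 0.0
        for p in power_profile_kw:            # one entry per second
            used += p / 3600.0                # kW over 1 s -> kWh
            if used > usable_kwh:
                break
            dist += speed_kmh / 3600.0        # km covered this second
        return dist

    # 30 kWh pack, constant 10 kW draw at 50 km/h: about 120 km to 80% DOD.
    km = simulate_range(30.0, [10.0] * 10000, 50.0)
    ```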

  16. Self-propelled in-tube shuttle and control system for automated measurements of magnetic field alignment

    SciTech Connect (OSTI)

    Boroski, W.N.; Nicol, T.H. ); Pidcoe, S.V. . Space Systems Div.); Zink, R.A. )

    1990-03-01

    A magnetic field alignment gauge is used to measure the field angle as a function of axial position in each of the magnets for the Superconducting Super Collider (SSC). Present measurements are made by manually pushing the gauge through the magnet bore tube and stopping at intervals to record field measurements. Gauge location is controlled through graduation marks and alignment pins on the push rods. Field measurements are recorded on a logging multimeter with tape output. Described is a computerized control system being developed to replace the manual procedure for field alignment measurements. The automated system employs a pneumatic walking device to move the measurement gauge through the bore tube. Movement of the device, called the Self-Propelled In-Tube Shuttle (SPITS), is accomplished through an integral, gas driven, double-acting cylinder. The motion of the SPITS is transferred to the bore tube by means of a pair of controlled, retractable support feet. Control of the SPITS is accomplished through an RS-422 interface from an IBM-compatible computer to a series of solenoid-actuated air valves. Direction of SPITS travel is determined by the air-valve sequence, and is managed through the control software. Precise axial position of the gauge within the magnet is returned to the control system through an optically-encoded digital position transducer attached to the shuttle. Discussed is the performance of the transport device and control system during preliminary testing of the first prototype shuttle. 1 ref., 7 figs.

  17. Engineering Design Information System (EDIS)

    SciTech Connect (OSTI)

    Smith, P.S.; Short, R.D.; Schwarz, R.K.

    1990-11-01

    This manual is a guide to the use of the Engineering Design Information System (EDIS) Phase I. The system runs on the Martin Marietta Energy Systems, Inc., IBM 3081 unclassified computer. This is the first phase in the implementation of EDIS, which is an index, storage, and retrieval system for engineering documents produced at various plants and laboratories operated by Energy Systems for the Department of Energy. This manual presents an overview of EDIS, describing the system's purpose; the functions it performs; hardware, software, and security requirements; and help and error functions. This manual describes how to access EDIS and how to operate system functions using Database 2 (DB2), Time Sharing Option (TSO), Interactive System Productivity Facility (ISPF), and Soft Master viewing features employed by this system. Appendix A contains a description of the Soft Master viewing capabilities provided through the EDIS View function. Appendix B provides examples of the system error screens and help screens for valid codes used for screen entry. Appendix C contains a dictionary of data elements and descriptions.

  18. An update on modeling land-ice/ocean interactions in CESM

    SciTech Connect (OSTI)

    Asay-davis, Xylar

    2011-01-24

    This talk is an update on ongoing land-ice/ocean coupling work within the Community Earth System Model (CESM). The coupling method is designed to allow simulation of a fully dynamic ice/ocean interface, while requiring minimal modification to the existing ocean model (the Parallel Ocean Program, POP). The method makes use of an immersed boundary method (IBM) to represent the geometry of the ice-ocean interface without requiring that the computational grid be modified in time. We show many of the remaining development challenges that need to be addressed in order to perform global, century long climate runs with fully coupled ocean and ice sheet models. These challenges include moving to a new grid where the computational pole is no longer at the true south pole and several changes to the coupler (the software tool used to communicate between model components) to allow the boundary between land and ocean to vary in time. We discuss benefits for ice/ocean coupling that would be gained from longer-term ocean model development to allow for natural salt fluxes (which conserve both water and salt mass, rather than water volume).

  19. Department of Defense (DOD) renewables and energy efficiency planning (REEP) program manual

    SciTech Connect (OSTI)

    Nemeth, R.J.; Fournier, D.; Debaillie, L.; Edgar, L.; Stroot, P.; Beasley, R.; Edgar, D.; McMillen, L.; Marren, M.

    1995-08-01

    The Renewables and Energy Efficiency Planning (REEP) program was developed at the US Army Construction Engineering Research Laboratories (USACERL). This program allows for the analysis of 78 energy and water conservation opportunities at 239 major DOD installations. REEP uses a series of algorithms in conjunction with installation-specific data to estimate the energy and water conservation potential for entire installations. The program provides the energy, financial, pollution, and social benefits of conservation initiatives. The open architecture of the program allows for simple modification of energy and water conservation variables and installation database values to allow for individualized analysis. The program is essentially a high-level screening tool that can be used to help identify and focus preliminary conservation studies. The REEP program requires an IBM PC or compatible with an 80386 or 80486 microprocessor. It also requires approximately 4 megabytes of disk space and at least 8 megabytes of RAM. The system was developed for a Windows environment and requires Microsoft Windows 3.1 or higher to run properly.
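
    The screening arithmetic such a tool applies to each conservation opportunity reduces, at its simplest, to estimated annual savings and simple payback. The figures below are illustrative only, not REEP data or algorithms.

    ```python
    # Simplest form of conservation-opportunity screening: annual savings
    # and simple payback. Numbers are hypothetical, not from REEP.

    def screen(install_cost, annual_kwh_saved, rate_per_kwh):
        """Return (annual dollar savings, simple payback in years)."""
        savings = annual_kwh_saved * rate_per_kwh
        payback = install_cost / savings
        return savings, payback

    savings, payback = screen(install_cost=12000.0,
                              annual_kwh_saved=40000.0,
                              rate_per_kwh=0.10)   # ~$4000/yr, ~3 yr payback
    ```

    A real screening pass would also fold in water savings, pollution benefits, and discounting; simple payback is just the first-cut filter.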

  20. Life-Cycle Cost Analysis for Utility Combinations (LCCA) (for microcomputers). Software

    SciTech Connect (OSTI)

    Corin, N.

    1989-09-01

    The Life-Cycle Cost Analysis for Utility Combinations (LCCA) system evaluates housing project utility systems. The system determines cost-effectiveness and aids in the selection of the utility combination with the lowest life-cycle cost. Because of the large number of possible combinations of fuels, purchasing methods, types of installations, and utility rates, a systematic analysis of costs must be made. The choice of utilities may substantially influence construction cost. LCCA calculates initial and monthly costs of both individual dwelling units and project totals for four utility combinations: Combination 1--Electricity; Combination 2--Electricity and Gas; Combination 3--Electricity and Oil; and Combination 4--Electricity, Gas and Oil. Software Description: The software is written in the Lotus 1-2-3 programming language for implementation on an IBM PC microcomputer using Lotus 1-2-3. Software requires 160K of disk storage, with a hard disk and one floppy or two floppy disk drives.
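
    Life-cycle cost in the sense used here is initial cost plus the present value of recurring monthly utility costs over the analysis period, so combinations with different initial/monthly trade-offs can be ranked on one number. The discounting convention and all figures below are assumptions of this sketch, not LCCA's.

    ```python
    # Life-cycle cost as initial cost plus the present value of a monthly
    # cost stream (ordinary-annuity discounting). Figures are hypothetical.

    def life_cycle_cost(initial, monthly_cost, annual_rate, years):
        i = annual_rate / 12.0                    # monthly discount rate
        n = years * 12                            # number of monthly payments
        pv_factor = (1 - (1 + i) ** -n) / i       # annuity present-value factor
        return initial + monthly_cost * pv_factor

    # Two illustrative combinations over a 25-year analysis at 6%/yr:
    lcc_electric = life_cycle_cost(8000.0, 120.0, 0.06, 25)
    lcc_gas      = life_cycle_cost(11000.0, 95.0, 0.06, 25)
    # Higher first cost, lower monthly cost wins here on life-cycle cost.
    ```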

  1. WINDOW 4.0: Program description. A PC program for analyzing the thermal performance of fenestration products

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    WINDOW 4.0 is a publicly available IBM PC compatible computer program developed by the Windows and Daylighting Group at Lawrence Berkeley Laboratory for calculating total window thermal performance indices (e.g. U-values, solar heat gain coefficients, shading coefficients, and visible transmittances). WINDOW 4.0 provides a versatile heat transfer analysis method consistent with the rating procedure developed by the National Fenestration Rating Council (NFRC). The program can be used to design and develop new products, to rate and compare performance characteristics of all types of window products, to assist educators in teaching heat transfer through windows, and to help public officials in developing building energy codes. WINDOW 4.0 is a major revision to WINDOW 3.1 and we strongly urge all users to read this manual before using the program. Users who need professional assistance with the WINDOW 4.0 program or other window performance simulation issues are encouraged to contact one or more of the NFRC-accredited Simulation Laboratories. A list of these accredited simulation professionals is available from the NFRC.
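
    The headline index such a program reports, the overall U-value, is the reciprocal of the total thermal resistance of the film coefficients, glazing layers, and gas gaps taken in series. The layer values below are generic illustrations, not NFRC rating data, and the real WINDOW calculation also handles radiative exchange and edge effects.

    ```python
    # Series-resistance sketch of a window U-value (illustrative layers,
    # not NFRC data; WINDOW's actual model is far more detailed).

    def u_value(resistances):
        """Overall U (W/m2K) for layer resistances in series (m2K/W)."""
        return 1.0 / sum(resistances)

    layers = [
        0.04,          # exterior air film
        0.003 / 1.0,   # 3 mm glass, k = 1.0 W/mK
        0.16,          # air gap (typical effective resistance)
        0.003 / 1.0,   # 3 mm glass
        0.12,          # interior air film
    ]
    U = u_value(layers)   # roughly 3 W/m2K for this double glazing
    ```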

  2. WINDOW 4.0: Program description. A PC program for analyzing the thermal performance of fenestration products

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    WINDOW 4.0 is a publicly available IBM PC compatible computer program developed by the Windows and Daylighting Group at Lawrence Berkeley Laboratory for calculating total window thermal performance indices (e.g. U-values, solar heat gain coefficients, shading coefficients, and visible transmittances). WINDOW 4.0 provides a versatile heat transfer analysis method consistent with the rating procedure developed by the National Fenestration Rating Council (NFRC). The program can be used to design and develop new products, to rate and compare performance characteristics of all types of window products, to assist educators in teaching heat transfer through windows, and to help public officials in developing building energy codes. WINDOW 4.0 is a major revision to WINDOW 3.1 and we strongly urge all users to read this manual before using the program. Users who need professional assistance with the WINDOW 4.0 program or other window performance simulation issues are encouraged to contact one or more of the NFRC-accredited Simulation Laboratories. A list of these accredited simulation professionals is available from the NFRC.

  3. Scalable Equation of State Capability

    SciTech Connect (OSTI)

    Epperly, T W; Fritsch, F N; Norquist, P D; Sanford, L A

    2007-12-03

    The purpose of this techbase project was to investigate the use of parallel array data types to reduce the memory footprint of the Livermore Equation Of State (LEOS) library. Addressing the memory scalability of LEOS is necessary to run large scientific simulations on IBM BG/L and future architectures with low memory per processing core. We considered using normal MPI, one-sided MPI, and Global Arrays to manage the distributed array and ended up choosing Global Arrays because it was the only communication library that provided the level of asynchronous access required. To reduce the runtime overhead of using a parallel array data structure, a least recently used (LRU) caching algorithm was used to provide a local cache of commonly used parts of the parallel array. The approach was initially implemented in an isolated copy of LEOS and was later integrated into the main trunk of the LEOS Subversion repository. The approach was tested using a simple test problem. Testing indicated that the approach was feasible, and the simple LRU caching achieved an 86% hit rate.
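
    The access pattern described, routing lookups through a small least-recently-used cache of distributed-array blocks so repeated lookups avoid a remote fetch, can be sketched with an `OrderedDict`. The "remote" fetch below is a stand-in for a Global Arrays get; the class and its interface are illustrative, not LEOS code.

    ```python
    # LRU block cache sketch: OrderedDict order tracks recency; the fetch
    # callable simulates the remote get. Not the LEOS implementation.
    from collections import OrderedDict

    class BlockCache:
        def __init__(self, fetch, capacity):
            self.fetch, self.capacity = fetch, capacity
            self.blocks = OrderedDict()
            self.hits = self.misses = 0

        def get(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)     # mark most recently used
                self.hits += 1
            else:
                self.misses += 1
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)   # evict least recently used
                self.blocks[block_id] = self.fetch(block_id)
            return self.blocks[block_id]

    cache = BlockCache(fetch=lambda b: [b] * 4, capacity=2)
    for b in [0, 1, 0, 0, 2, 0]:
        cache.get(b)
    hit_rate = cache.hits / (cache.hits + cache.misses)   # 0.5 here
    ```

    The reported 86% hit rate in the abstract suggests table accesses in LEOS are strongly clustered, which is exactly when an LRU policy pays off.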

  4. Software Roadmap to Plug and Play Petaflop/s

    SciTech Connect (OSTI)

    Kramer, Bill; Carter, Jonathan; Skinner, David; Oliker, Lenny; Husbands, Parry; Hargrove, Paul; Shalf, John; Marques, Osni; Ng, Esmond; Drummond, Tony; Yelick, Kathy

    2006-07-31

    In the next five years, the DOE expects to build systems that approach a petaflop in scale. In the near term (two years), DOE will have several near-petaflop systems that are 10 percent to 25 percent of a petaflop-scale system. A common feature of these precursors to petaflop systems (such as the Cray XT3 or the IBM BlueGene/L) is that they rely on an unprecedented degree of concurrency, which puts stress on every aspect of HPC system design. Such complex systems will likely break current best practices for fault resilience, I/O scaling, and debugging, and even raise fundamental questions about languages and application programming models. It is important that potential problems are anticipated far enough in advance that they can be addressed in time to prepare the way for petaflop-scale systems. This report considers the following four questions: (1) What software is on a critical path to make the systems work? (2) What are the strengths/weaknesses of the vendors and of existing vendor solutions? (3) What are the local strengths at the labs? (4) Who are other key players who will play a role and can help?

  5. Diagnosing the Causes and Severity of One-sided Message Contention

    SciTech Connect (OSTI)

    Tallent, Nathan R.; Vishnu, Abhinav; van Dam, Hubertus; Daily, Jeffrey A.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-02-11

    Two trends suggest network contention for one-sided messages is poised to become a performance problem that concerns application developers: an increased interest in one-sided programming models and a rising ratio of hardware threads to network injection bandwidth. Unfortunately, it is difficult to reason about network contention and one-sided messages because one-sided tasks can either decrease or increase contention. We present effective and portable techniques for diagnosing the causes and severity of one-sided message contention. To detect that a message is affected by contention, we maintain statistics representing instantaneous (non-local) network resource demand. Using lightweight measurement and modeling, we identify the portion of a message's latency that is due to contention and whether contention occurs at the initiator or target. We attribute these metrics to program statements in their full static and dynamic context. We characterize contention for an important computational chemistry benchmark on InfiniBand, Cray Aries, and IBM Blue Gene/Q interconnects. We pinpoint the sources of contention, estimate their severity, and show that when message delivery time deviates from an ideal model, there are other messages contending for the same network links. With a small change to the benchmark, we reduce contention up to 50% and improve total runtime as much as 20%.
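
    The core diagnostic idea above can be shown in miniature: compare each message's measured latency to an ideal, contention-free model latency, and attribute the excess to contention. The linear α + βn latency model is a standard simplification, and all constants and numbers below are illustrative, not the paper's measurements.

    ```python
    # Contention attribution sketch: excess over an ideal latency model is
    # charged to contention. Model constants are hypothetical.

    def ideal_latency(nbytes, alpha=1.0e-6, beta=1.0e-9):
        """Contention-free model: startup alpha + nbytes/bandwidth (seconds)."""
        return alpha + beta * nbytes

    def contention_fraction(nbytes, measured_s):
        """Fraction of the measured latency attributed to contention."""
        excess = max(0.0, measured_s - ideal_latency(nbytes))
        return excess / measured_s

    # A 1 MB message that took 3 ms against a ~1 ms model latency:
    frac = contention_fraction(1_000_000, 3.0e-3)   # about two-thirds
    ```

    The paper's technique goes further, attributing the excess to initiator or target and to program statements in context; this only shows the latency decomposition.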

  6. Resonance parameter analysis with SAMMY

    SciTech Connect (OSTI)

    Larson, N.M.; Perey, F.G.

    1988-01-01

    The multilevel R-matrix computer code SAMMY has evolved over the past decade to become an important analysis tool for neutron data. SAMMY uses the Reich-Moore approximation to the multilevel R-matrix and includes an optional logarithmic parameterization of the external R-function. Doppler broadening is simulated either by numerical integration using the Gaussian approximation to the free gas model or by a more rigorous solution of the partial differential equation equivalent to the exact free gas model. Resolution broadening of cross sections and derivatives also has new options that more accurately represent the experimental situation. SAMMY treats constant normalization and some types of backgrounds directly and treats other normalizations and/or backgrounds with the introduction of user-generated partial derivatives. The code uses Bayes' method as an efficient alternative to least squares for fitting experimental data. SAMMY allows virtually any parameter to be varied and outputs values, uncertainties, and covariance matrix for all varied parameters. Versions of SAMMY exist for VAX, FPS, and IBM computers.

  7. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).

  8. PDS SHRINK

    SciTech Connect (OSTI)

    Phillion, D.

    1991-12-15

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16 bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change of lambda width. It is not necessary to do this, however. Lineouts may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.

  9. Second update: The Gordon Bell Competition entry gb110s2

    SciTech Connect (OSTI)

    Vranas, P; Soltz, R

    2006-11-12

    Since the update to our entry of October 20th we have just made a significant improvement. We understand that this is past the deadline for updates and very close to the conference date. However, Lawrence Livermore National Laboratory has just updated the BG/L system software on their full 64 BG/L supercomputer to IBM-BGL Release 3. As we discussed in our update of October 20, this release includes our custom L1 and SRAM access functions that allow us to achieve higher sustained performance. Just a few hours ago we got access to the full system and obtained the fastest sustained performance point. In the full 131,072 CPU-core system, QCD sustains 70.9 teraflops for the Dirac operator and 67.9 teraflops for the full Conjugate Gradient inverter. This is about 20% faster than our last update. We attach the corresponding speedup figure. As you can tell, the speedup is perfect. This figure is the same as Figure 1 of our October 20th update except that it now includes the 131,072 CPU-core point.

  10. Final Report: Performance Modeling Activities in PERC2

    SciTech Connect (OSTI)

    Allan Snavely

    2007-02-25

    Progress in Performance Modeling for PERC2 resulted in: • Automated modeling tools that are robust, able to characterize large applications running at scale while simultaneously simulating the memory hierarchies of multiple machines in parallel. • Porting of the requisite tracer tools to multiple platforms. • Improved performance models by using higher resolution memory models than ever before. • Adding control-flow and data dependency analysis to the tracers used in performance tools. • Exploring and developing several new modeling methodologies. • Using modeling tools to develop performance models for strategic codes. • Application of modeling methodology to make a large number of “blind” performance predictions on certain mission partner applications, targeting most currently available system architectures. • Error analysis to correct some systematic biases encountered as part of the large-scale blind prediction exercises. • Addition of instrumentation capabilities for communication libraries other than MPI. • Dissemination of the tools and modeling methods to several mission partners, including DoD HPCMO and two DARPA HPCS vendors (Cray and IBM), as well as to the wider HPC community via a series of tutorials.

  11. User's guide and documentation manual for "BOAST-VHS for the PC"

    SciTech Connect (OSTI)

    Chang, Ming-Ming; Sarathi, P.; Heemstra, R.J.; Cheng, A.M.; Pautz, J.F.

    1992-01-01

    The recent advancement of computer technology makes reservoir simulations feasible in a personal computer (PC) environment. This manual provides a guide for running BOAST-VHS, a black oil reservoir simulator for vertical/horizontal/slant wells, using a PC. In addition to detailed explanations of input data file preparation for simulation runs, special features of BOAST-VHS are described and three sample problems are presented. BOAST-VHS is a cost-effective and easy-to-use reservoir simulation tool for the study of oil production from primary depletion and waterflooding in a black oil reservoir. The well model in BOAST-VHS permits specification of any combination of horizontal, slanted, and vertical wells in the reservoir. BOAST-VHS was designed for an IBM PC/AT, PS-2, or compatible computer with 640 K bytes of memory. BOAST-VHS can be used to model a three-dimensional reservoir of up to 810 grid blocks with any combination of rows, columns, and layers, depending on the input data supplied. This dynamic redimensioning feature facilitates simulation work by avoiding the need to recompile the simulator for different reservoir models. Therefore, the program is supplied only as executable code without any source code.

  13. On-line test of signal validation software on the LOBI-MOD2 facility in Ispra, Italy

    SciTech Connect (OSTI)

    Prock, J.; Labeit, M.; Ohlmer, E. (Joint Research Centre)

    1992-01-01

    A computer program for the detection of abrupt changes in nonhardware redundant measurement signals that uses different methods of analytical redundancy is developed by the Gesellschaft fur Reaktorsicherheit, Garching, Federal Republic of Germany. The program, instrumental fault detection and identification (IFDI) module, validates in real time output signals of power plant components that are scanned at a fixed rate. The IFDI module, implemented on an IBM-compatible personal computer (PC) with an 80386 processor, is tested on-line at the light water reactor off-normal behavior investigations (LOBI-MOD2) facility in the Joint Research Centre, Ispra, Italy, during the loss-of-feedwater experiment BT-15/BT-16 on November 22, 1990. The measurement signals validated by the IFDI module originate from one of the two LOBI-MOD2 facility's steam generators. During the experiment, sensor faults are simulated by falsifying the measurement signals through electrical resistances arranged in series. In this paper questions about the signal validation software and the steam generator's model are dealt with briefly, while the experimental environment and the results obtained are discussed in detail.
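
    Abrupt-change detection of the kind the IFDI module performs is often built on sequential tests such as CUSUM. The sketch below shows a two-sided CUSUM detector on a simulated sensor signal; the drift and threshold values are illustrative assumptions, not parameters of the IFDI module.

```python
# Sketch: two-sided CUSUM test for abrupt changes in a scanned signal.
def cusum_detect(samples, mean, drift=0.5, threshold=5.0):
    """Return the index at which an abrupt change is flagged, or None."""
    pos = neg = 0.0
    for i, x in enumerate(samples):
        pos = max(0.0, pos + (x - mean) - drift)   # upward-shift statistic
        neg = max(0.0, neg - (x - mean) - drift)   # downward-shift statistic
        if pos > threshold or neg > threshold:
            return i
    return None

# A signal steady at 10.0 that jumps to 13.0 at sample 50 is flagged
# shortly after the jump, while the fault-free prefix is not.
signal = [10.0] * 50 + [13.0] * 50
assert cusum_detect(signal[:50], mean=10.0) is None
idx = cusum_detect(signal, mean=10.0)
assert idx is not None and idx >= 50
```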

  14. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    SciTech Connect (OSTI)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.
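
    The search-based auto-tuning strategy can be sketched in a few lines: generate candidate variants of a kernel, time each, and keep the fastest. The toy kernel and its "step" parameter below are invented stand-ins, not LBMHD code or the authors' generator.

```python
import time

def make_kernel(step):
    """Hypothetical code-generator stand-in: one kernel variant per parameter."""
    def kernel(n):
        total = 0
        for i in range(0, n, step):
            total += i
        return total
    return kernel

def autotune(candidates, n=200000, trials=3):
    """Time each candidate variant and return the best-performing parameter."""
    best_param, best_time = None, float("inf")
    for param in candidates:
        kernel = make_kernel(param)
        elapsed = float("inf")
        for _ in range(trials):
            t0 = time.perf_counter()
            kernel(n)
            elapsed = min(elapsed, time.perf_counter() - t0)
        if elapsed < best_time:
            best_param, best_time = param, elapsed
    return best_param

best = autotune([1, 4, 16])
assert best in (1, 4, 16)
```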

  15. INTERLINE 5.0 -- An expanded railroad routing model: Program description, methodology, and revised user's manual

    SciTech Connect (OSTI)

    Johnson, P.E.; Joy, D.S.; Clarke, D.B.; Jacobi, J.M.

    1993-03-01

    A rail routing model, INTERLINE, has been developed at the Oak Ridge National Laboratory to investigate potential routes for transporting radioactive materials. In Version 5.0, the INTERLINE routing algorithms have been enhanced to include the ability to predict alternative routes, barge routes, and population statistics for any route. The INTERLINE railroad network is essentially a computerized rail atlas describing the US railroad system. All rail lines, with the exception of industrial spurs, are included in the network. Inland waterways and deep water routes along with their interchange points with the US railroad system are also included. The network contains over 15,000 rail and barge segments (links) and over 13,000 stations, interchange points, ports, and other locations (nodes). The INTERLINE model has been converted to operate on an IBM-compatible personal computer. At least a 286 computer with a hard disk containing approximately 6 MB of free space is recommended. Enhanced program performance will be obtained by using a random-access memory drive on a 386 or 486 computer.
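
    Route prediction over a link/node network of this kind reduces to shortest-path search. The sketch below runs Dijkstra's algorithm over a toy rail network; the place names and mileages are invented for illustration and have nothing to do with the actual INTERLINE atlas.

```python
import heapq

def shortest_route(links, origin, dest):
    """Dijkstra over a dict {node: [(neighbor, miles), ...]}."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, miles in links.get(node, []):
            nd = d + miles
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], dest
    while node != origin:
        path.append(node)
        node = prev[node]
    path.append(origin)
    return list(reversed(path)), dist[dest]

rail = {   # hypothetical network: mileages are made up
    "Oak Ridge": [("Knoxville", 25), ("Chattanooga", 110)],
    "Knoxville": [("Chattanooga", 112)],
    "Chattanooga": [("Atlanta", 118)],
}
route, miles = shortest_route(rail, "Oak Ridge", "Atlanta")
assert route == ["Oak Ridge", "Chattanooga", "Atlanta"]
assert miles == 228
```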

  17. xdamp Version 6 : an IDL-based data and image manipulation program.

    SciTech Connect (OSTI)

    Ballard, William Parker

    2012-04-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  18. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
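
    Cache blocking, the baseline optimization discussed above, restructures a stencil sweep into tiles so that reused neighbor values stay resident in cache. The sketch below contrasts a naive and a blocked 5-point stencil sweep that compute identical results; the tile size `tj` is an assumed tuning knob, and no performance claim is made for pure Python.

```python
# Sketch: naive vs cache-blocked 2D 5-point stencil; both produce the
# same output, differing only in traversal order.
def stencil_naive(a, n):
    out = [row[:] for row in a]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = 0.2 * (a[i][j] + a[i-1][j] + a[i+1][j]
                               + a[i][j-1] + a[i][j+1])
    return out

def stencil_blocked(a, n, tj=8):
    out = [row[:] for row in a]
    for jj in range(1, n - 1, tj):           # sweep column tiles
        for i in range(1, n - 1):
            for j in range(jj, min(jj + tj, n - 1)):
                out[i][j] = 0.2 * (a[i][j] + a[i-1][j] + a[i+1][j]
                                   + a[i][j-1] + a[i][j+1])
    return out

n = 20
grid = [[float(i * n + j) for j in range(n)] for i in range(n)]
assert stencil_naive(grid, n) == stencil_blocked(grid, n)
```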

  19. AIX 4.3 Elements of Security Effective and Efficient Implementation

    SciTech Connect (OSTI)

    Kosuge, Yoshimichi; Arminguad, Francois; Chew, Lip-Ping; Horne, Leonie; Witteveen, Timothy A.

    2000-01-01

    This IBM Redbook provides an overview of AIX Version 4.3 security. AIX provides many security features that can be used to improve security. The emphasis is on the practical use of these security features, why they are necessary, and how they can be used in your environment. Guidelines and best practices are also recommended for situations where there are many different ways to achieve a secure system. The exponential growth of networks has caused more and more computers to be connected together, which creates an excellent environment for information exchange and sharing. As an increasingly large amount of confidential information is stored and transmitted over public networks, such as the Internet, it becomes imperative that information security be implemented in an effective and efficient manner. This book covers different aspects of security present in AIX 4.3, including user accounts, file systems, networks, and security management. With its detailed product coverage, this book is intended for experienced AIX system administrators who are taking on the role of security administration. Security administrators who are new to AIX will also find this document useful. The reader is assumed to have a basic working knowledge of UNIX. This book is intended as an additional source of security information, and together with existing sources, may be used to enhance your knowledge of security.

  20. US Army Radiological Bioassay and Dosimetry: The RBD software package

    SciTech Connect (OSTI)

    Eckerman, K. F.; Ward, R. C.; Maddox, L. B.

    1993-01-01

    The RBD (Radiological Bioassay and Dosimetry) software package was developed for the U. S. Army Material Command, Arlington, Virginia, to demonstrate compliance with the radiation protection guidance 10 CFR Part 20 (ref. 1). Designed to be run interactively on an IBM-compatible personal computer, RBD consists of a data base module to manage bioassay data and a computational module that incorporates algorithms for estimating radionuclide intake from either acute or chronic exposures based on measurement of the worker's rate of excretion of the radionuclide or the retained activity in the body. In estimating the intake, RBD uses a separate file for each radionuclide containing parametric representations of the retention and excretion functions. These files also contain dose-per-unit-intake coefficients used to compute the committed dose equivalent. For a given nuclide, if measurements exist for more than one type of assay, an auxiliary module, REPORT, estimates the intake by applying weights assigned in the nuclide file for each assay. Bioassay data and computed results (estimates of intake and committed dose equivalent) are stored in separate data bases, and the bioassay measurements used to compute a given result can be identified. The REPORT module creates a file containing committed effective dose equivalent for each individual that can be combined with the individual's external exposure.
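
    The core intake calculation described above can be sketched simply: divide a measured excretion rate by the tabulated excretion function at the elapsed time, then multiply the intake by a dose-per-unit-intake coefficient. Every number below (the excretion table and the dose coefficient) is invented for illustration, not taken from RBD's nuclide files.

```python
# Sketch: intake and committed dose from a single bioassay measurement.
EXCRETION = {1: 0.02, 7: 0.005, 30: 0.001}   # hypothetical e(t), fraction/day
DOSE_PER_INTAKE = 2.5e-5                     # hypothetical mSv per Bq intake

def estimate_intake(measured_rate, days_after):
    """Intake (Bq) implied by an excretion-rate measurement (Bq/day)."""
    return measured_rate / EXCRETION[days_after]

def committed_dose(intake):
    """Committed dose equivalent from a dose-per-unit-intake coefficient."""
    return intake * DOSE_PER_INTAKE

intake = estimate_intake(4.0, days_after=7)   # 4 Bq/day measured on day 7
assert intake == 800.0
assert abs(committed_dose(intake) - 0.02) < 1e-12
```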

  1. Comparison of open-source linear programming solvers.

    SciTech Connect (OSTI)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
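
    The test problems such a comparison exercises are standard-form LPs. As a self-contained illustration (deliberately using none of the surveyed solvers), the sketch below solves a tiny two-variable LP by brute-force vertex enumeration, which is exact for problems this small.

```python
import itertools

# Sketch: maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0,
# by enumerating intersections of constraint pairs (the candidate vertices).
def solve_2var_lp(c, A, b):
    """Return the best feasible vertex and objective value for max c.x, Ax <= b."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in itertools.combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraints
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

A = [(1, 1), (1, 0), (-1, 0), (0, -1)]   # x+y<=4, x<=2, -x<=0, -y<=0
b = [4, 2, 0, 0]
point, value = solve_2var_lp((3, 2), A, b)
assert value == 10.0 and point == (2.0, 2.0)
```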

  2. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    SciTech Connect (OSTI)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k/sub eff/ and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method. (WHK)
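
    The basic machinery this abstract names, diamond differencing plus inner/outer source iteration, can be sketched for the simplest case: one group, S2 quadrature, isotropic scattering, plane geometry, vacuum boundaries. The cross sections, source, and iteration limits below are made up, and the sketch omits ONEDANT's negative-flux fix-up and diffusion synthetic acceleration.

```python
# Sketch: one-group S2 diamond-difference source iteration in a slab.
def slab_flux(nx=50, L=10.0, sig_t=1.0, sig_s=0.5, q=1.0, tol=1e-8):
    dx = L / nx
    mu = 0.5773502691896258            # S2 Gauss point; weights are 1 each
    phi = [0.0] * nx
    for _ in range(500):               # outer (source) iterations
        src = [0.5 * (sig_s * p + q) for p in phi]   # isotropic source
        new = [0.0] * nx
        for direction in (+1, -1):
            psi_edge = 0.0             # vacuum boundary condition
            cells = range(nx) if direction > 0 else range(nx - 1, -1, -1)
            for i in cells:
                psi = (src[i] + 2 * mu * psi_edge / dx) / (sig_t + 2 * mu / dx)
                psi_edge = 2 * psi - psi_edge   # diamond-difference closure
                new[i] += psi                   # quadrature weight = 1
        if max(abs(a - b) for a, b in zip(new, phi)) < tol:
            return new
        phi = new
    return phi

phi = slab_flux()
assert all(p > 0 for p in phi)
# With q = 1, sig_t = 1, sig_s = 0.5, the infinite-medium flux is 2.0;
# leakage through the vacuum boundaries keeps the slab flux below it.
assert max(phi) < 2.0
```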

  3. Common Geometry Module

    Energy Science and Technology Software Center (OSTI)

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  4. What then do we do about computer security?

    SciTech Connect (OSTI)

    Suppona, Roger A.; Mayo, Jackson R.; Davis, Christopher Edward; Berg, Michael J.; Wyss, Gregory Dane

    2012-01-01

    This report presents the answers that an informal and unfunded group at SNL provided for questions concerning computer security posed by Jim Gosler, Sandia Fellow (00002). The primary purpose of this report is to record our current answers; hopefully those answers will turn out to be answers indeed. The group was formed in November 2010. In November 2010 Jim Gosler, Sandia Fellow, asked several of us several pointed questions about computer security metrics. Never mind that some of the best minds in the field have been trying to crack this nut without success for decades. Jim asked Campbell to lead an informal and unfunded group to answer the questions. With time Jim invited several more Sandians to join in. We met a number of times both with Jim and without him. At Jim's direction we contacted a number of people outside Sandia who Jim thought could help. For example, we interacted with IBM's T.J. Watson Research Center and held a one-day, videoconference workshop with them on the questions.

  5. Characteristics of workload on ASCI blue-pacific at lawrence livermore national laboratory

    SciTech Connect (OSTI)

    Yoo, A B; Jette, M A

    2000-08-14

    Symmetric multiprocessor (SMP) clusters have become the prevalent computing platforms for large-scale scientific computation in recent years mainly due to their good scalability. In fact, many parallel machines being used at supercomputing centers and national laboratories are of this type. It is critical and often very difficult on such large-scale parallel computers to efficiently manage a stream of jobs, whose requirement for resources and computing time greatly varies. Understanding the characteristics of workload imposed on a target environment plays a crucial role in managing system resources and developing an efficient resource management scheme. A parallel workload is analyzed typically by studying the traces from actual production parallel machines. The study of the workload traces not only provides the system designers with insight on how to design good processor allocation and job scheduling policies for efficient resource management, but also helps system administrators monitor and fine-tune the resource management strategies and algorithms. Furthermore, the workload traces are a valuable resource for those who conduct performance studies through either simulation or analytical modeling. The workload traces can be directly fed to a trace-driven simulator in more realistic and specific simulation experiments. Alternatively, one can obtain certain parameters that characterize the workload by analyzing the traces, and then use them to construct a workload model or to drive a simulation in which a large number of runs are required. Considering these benefits, they collected and analyzed the job traces from ASCI Blue-Pacific, a 336-node IBM SP2 machine at Lawrence Livermore National Laboratory (LLNL). The job traces used span a period of about six months, from October 1999 till the first week of May 2000. The IBM SP2 machine at the LLNL uses the gang-scheduling LoadLeveler (GangLL) to manage parallel jobs. User jobs are submitted to the GangLL via a locally

  6. Guide to verification and validation of the SCALE-4 criticality safety software

    SciTech Connect (OSTI)

    Emmett, M.B.; Jordan, W.C.

    1996-12-01

    Whenever a decision is made to newly install the SCALE nuclear criticality safety software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the nuclear criticality safety software in a version of SCALE-4. The verification problems specified by the code developers have been run, and the results compare favorably with those in the SCALE 4.2 baseline. The results reported in this document are from the SCALE 4.2P version which was run on an IBM RS/6000 workstation. These results verify that the SCALE-4 nuclear criticality safety software has been correctly installed and is functioning properly. A validation has been performed for KENO V.a utilizing the CSAS25 criticality sequence and the SCALE 27-group cross-section library for ²³³U, ²³⁵U, and ²³⁹Pu fissile systems in a broad range of geometries and fissile fuel forms. The experimental models used for the validation were taken from three previous validations of KENO V.a. A statistical analysis of the calculated results was used to determine the average calculational bias and a subcritical k_eff criteria for each class of systems validated. Included in the statistical analysis is a means of estimating the margin of subcriticality in k_eff. This validation demonstrates that KENO V.a and the 27-group library may be used for nuclear criticality safety computations provided the system being analyzed falls within the range of the experiments used in the validation.
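
    The bias-and-margin arithmetic behind a subcritical limit of this kind can be sketched in a few lines. The k_eff values, the two-sigma uncertainty treatment, and the administrative margin below are invented for illustration; a real analysis follows the full statistical procedure of the validation report.

```python
import statistics

# Sketch: mean bias and a simple upper subcritical limit from calculated
# k_eff values for benchmarks that are known to be critical (k = 1).
def subcritical_limit(keffs, margin=0.05):
    """Limit = 1 + (negative) bias - bias uncertainty - administrative margin."""
    mean = statistics.mean(keffs)
    bias = mean - 1.0
    unc = 2 * statistics.stdev(keffs)     # ~95% band, simplistic treatment
    return 1.0 + min(bias, 0.0) - unc - margin   # no credit for positive bias

keffs = [0.995, 1.001, 0.998, 0.992, 0.999]   # hypothetical validation results
usl = subcritical_limit(keffs)
assert usl < 1.0 - 0.05      # the limit sits below 1 minus the margin
```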

  7. Quantitative genetic activity graphical profiles for use in chemical evaluation

    SciTech Connect (OSTI)

    Waters, M.D.; Stack, H.F.; Garrett, N.E.; Jackson, M.A.

    1990-12-31

    A graphic approach, termed a Genetic Activity Profile (GAP), was developed to display a matrix of data on the genetic and related effects of selected chemical agents. The profiles provide a visual overview of the quantitative (doses) and qualitative (test results) data for each chemical. Either the lowest effective dose or highest ineffective dose is recorded for each agent and bioassay. Up to 200 different test systems are represented across the GAP. Bioassay systems are organized according to the phylogeny of the test organisms and the end points of genetic activity. The methodology for producing and evaluating genetic activity profiles was developed in collaboration with the International Agency for Research on Cancer (IARC). Data on individual chemicals were compiled by IARC and by the US Environmental Protection Agency (EPA). Data are available on 343 compounds selected from volumes 1-53 of the IARC Monographs and on 115 compounds identified as Superfund Priority Substances. Software to display the GAPs on an IBM-compatible personal computer is available from the authors. Structurally similar compounds frequently display qualitatively and quantitatively similar profiles of genetic activity. Through examination of the patterns of GAPs of pairs and groups of chemicals, it is possible to make more informed decisions regarding the selection of test batteries to be used in evaluation of chemical analogs. GAPs provided useful data for development of weight-of-evidence hazard ranking schemes. Also, some knowledge of the potential genetic activity of complex environmental mixtures may be gained from an assessment of the genetic activity profiles of component chemicals. The fundamental techniques and computer programs devised for the GAP database may be used to develop similar databases in other disciplines. 36 refs., 2 figs.

  8. [Cyclotron based nuclear science]. Progress in research, April 1, 1992--March 31, 1993

    SciTech Connect (OSTI)

    Not Available

    1993-07-01

    The period 1 April 1992--31 March 1993 saw the initial runs of three new spectrometers, which constitute a major portion of the new detection capabilities developed for this facility. These devices are the Proton Spectrometer (PSP) (data from which are shown on the cover of this document), the Momentum Achromat Recoil Spectrometer (MARS), and the Multipole Dipole Multipole (MDM) Particle Spectrometer. The ECR-K500 cyclotron combination operated 5,849 hours. The beam was on target 39% of this time. Studies of nuclear dynamics and nuclear thermodynamics using the neutron ball have come to fruition. A critical re-evaluation of the available data on the giant monopole resonance indicated that the incompressibility is not specified to a range smaller than 200--350 MeV by those data. New systematic experiments using the MDM spectrometer are now underway. The MEGA collaboration obtained the first data on the {mu} {yields} e{gamma} decay rate and determination of the Michel parameter in normal {mu} decay. Experiments appear to confirm the existence of monoenergetic pair peaks even for relatively low Z{sub projectile} -- Z{sub target} combinations. Studies of the ({alpha},2{alpha}) knockout reaction indicate that this reaction may prove to be a valuable tool for determination of reaction rates of astrophysical interest. Theoretical work reported in this document ranges from nuclear structure calculations using the IBM-2 model to calculations of kaon production and the in-medium properties of the rho and phi mesons. Nuclear dynamics and exotic shapes and fragmentation modes of hot nuclei are also addressed. New measurements of x-ray emission from highly ionized ions, of molecular dissociation and of surface interactions are reported. The research is presented in nearly 50 brief summaries usually including data and references.

  9. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al, 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.
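
    The explicit filtering and reconstruction idea can be illustrated in one dimension: apply a discrete filter to a field, then recover part of the filtered-out content by iterative (van Cittert style) deconvolution. The grid, filter weights, and iteration count below are illustrative choices, not WRF's actual implementation.

```python
import numpy as np

def box_filter(u):
    # simple 1-2-1 explicit filter on a periodic grid
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def reconstruct(u_filtered, iterations=5):
    # van Cittert iteration u_{n+1} = u_n + (u_filtered - G(u_n))
    # approximately inverts the filter G
    u = u_filtered.copy()
    for _ in range(iterations):
        u = u + (u_filtered - box_filter(u))
    return u

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(3.0 * x)          # "true" field
u_f = box_filter(u)          # resolved (explicitly filtered) field
u_r = reconstruct(u_f)       # recovers part of the subfilter-scale content
```

    The reconstructed field lies closer to the unfiltered one than the filtered field does, which is what the RSFS stress computation exploits.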

  10. Wireless remote radiation monitoring system (WRRMS). Innovative technology summary report

    SciTech Connect (OSTI)

    Not Available

    1998-12-01

    The Science Applications International Corporation (SAIC) RadStar{trademark} wireless remote radiation monitoring system (WRRMS) is designed to provide real-time monitoring of the radiation dose to workers as they perform work in radiologically contaminated areas. WRRMS can also monitor dose rates in a room or area. The system uses radio-frequency communications to transmit dose readings from the wireless dosimeters worn by workers to a remote monitoring station that can be located out of the contaminated area. Each base station can monitor up to 16 workers simultaneously. The WRRMS can be preset to trigger both audible and visual alarms at certain dose rates. The alarms are provided to the worker as well as the base station operator. This system is particularly useful when workers are wearing personal protective clothing or respirators that make visual observation of their self-reading dosimeters (SRDs), which are typically used to monitor workers, more difficult. The base station is an IBM-compatible personal computer that updates and records information on individual workers every ten seconds. Although the equipment costs for this improved technology are higher than the SRDs (amortized at $2.54/hr versus $1.02/hr), total operational costs are actually less ($639/day versus $851/day). This is because the WRRMS requires fewer workers to be in the contaminated zone than the traditional (baseline) technology. There are also intangible benefits associated with improved worker safety and as low as reasonably achievable (ALARA) principles, making the WRRMS an attractive alternative to the baseline technology. The baseline technology measures only integrated dose and requires workers to check their own dosimeters manually during the task.
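
    The cost argument above (higher hourly equipment cost, lower daily total because fewer workers enter the zone) can be checked with a back-of-the-envelope calculation. The crew sizes, labor rate, and shift length below are invented placeholders, not figures from the report.

```python
# Hedged sketch: daily cost = (equipment $/hr + crew * labor $/hr) * hours.
# All inputs except the amortized equipment rates are assumptions.

def daily_cost(equipment_per_hr, crew_size, labor_per_hr, hours_per_day):
    return (equipment_per_hr + crew_size * labor_per_hr) * hours_per_day

baseline = daily_cost(1.02, crew_size=10, labor_per_hr=10.0, hours_per_day=8)
wrrms = daily_cost(2.54, crew_size=7, labor_per_hr=10.0, hours_per_day=8)
# under these assumed crews, the smaller WRRMS crew more than offsets the
# higher equipment rate, matching the direction of the report's comparison
```
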

  11. THE LOS ALAMOS NATIONAL LABORATORY ATMOSPHERIC TRANSPORT AND DIFFUSION MODELS

    SciTech Connect (OSTI)

    M. WILLIAMS

    1999-08-01

    The LANL atmospheric transport and diffusion models are composed of two state-of-the-art computer codes. The first is an atmospheric wind model called HOTMAC, the Higher Order Turbulence Model for Atmospheric Circulations. HOTMAC generates wind and turbulence fields by solving a set of atmospheric dynamic equations. The second is an atmospheric diffusion model called RAPTAD, Random Particle Transport And Diffusion. RAPTAD uses the wind and turbulence output from HOTMAC to compute particle trajectories and concentration at any location downwind from a source. Both of these models, originally developed as research codes on supercomputers, have been modified to run on microcomputers. Because the capability of microcomputers is advancing so rapidly, the expectation is that they will eventually become as good as today's supercomputers. Now both models are run on desktop or deskside computers, such as an IBM PC/AT with an Opus PM 350 32-bit coprocessor board and a SUN workstation. Codes have also been modified so that high level graphics, NCAR Graphics, of the output from both models are displayed on the desktop computer monitors and plotted on a laser printer. Two programs, HOTPLT and RAPLOT, produce wind vector plots of the output from HOTMAC and particle trajectory plots of the output from RAPTAD, respectively. A third, CONPLT, provides concentration contour plots. Section II describes step-by-step operational procedures, specifically for a SUN-4 deskside computer, on how to run main programs HOTMAC and RAPTAD, and graphics programs to display the results. Governing equations, boundary conditions and initial values of HOTMAC and RAPTAD are discussed in Section III. Finite-difference representations of the governing equations, numerical solution procedures, and a grid system are given in Section IV.
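
    The random-particle idea behind a code like RAPTAD can be sketched in one dimension: each particle is advanced by a mean wind plus a random turbulent velocity each step. The wind speed, turbulence level, and step counts below are arbitrary demonstration values, not RAPTAD's model.

```python
import random

def transport(n_particles, n_steps, u_mean, sigma_u, dt, seed=1):
    """Advance particles by (mean wind + Gaussian turbulent fluctuation)*dt.
    Concentration downwind is estimated from the particle positions."""
    rng = random.Random(seed)
    x = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            x[i] += (u_mean + rng.gauss(0.0, sigma_u)) * dt
    return x

positions = transport(n_particles=500, n_steps=100, u_mean=5.0,
                      sigma_u=1.0, dt=1.0)
mean_x = sum(positions) / len(positions)  # near u_mean * n_steps * dt = 500
```

    Binning the final positions would give the plume concentration profile that a plotting program like CONPLT contours.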

  12. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    SciTech Connect (OSTI)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to CRAYS, and can generate 1000 nodes/minute on a PC. With its efficient hidden line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post processor, is used to display DYNA3D output. In addition to the standard monochrome hidden line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8 color video screens by plotting color bands superimposed on the mesh which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden line removal in aiding the analyst in understanding his results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  13. QA procedures and emissions from nonstandard sources in AQUIS, a PC-based emission inventory and air permit manager

    SciTech Connect (OSTI)

    Smith, A.E.; Tschanz, J.; Monarch, M.

    1996-05-01

    The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
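
    The material-balance approach mentioned above can be sketched in its simplest form: emissions of a volatile component equal usage times the volatile fraction, reduced by whatever is retained in the product or captured. The function and sample values are a hedged illustration, not AQUIS's calculation procedures.

```python
# Hypothetical material-balance emission estimate; parameter names and
# numbers are invented for illustration.

def material_balance_emission(material_used_kg, volatile_fraction,
                              retained_fraction=0.0):
    """Mass emitted = usage * volatile content * (1 - fraction retained)."""
    return material_used_kg * volatile_fraction * (1.0 - retained_fraction)

# a solvent assumed fully released vs. a coating that retains some solvent
solvent_voc = material_balance_emission(100.0, 0.8)
paint_voc = material_balance_emission(100.0, 0.5, retained_fraction=0.3)
```

    The special cases the paper mentions (oils and greases, reactive polyurethane coatings) would replace the simple retained-fraction term with process-specific corrections.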

  14. A Big Data Approach to Analyzing Market Volatility

    SciTech Connect (OSTI)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming; Leinweber, David; Ruebel, Oliver

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that the HPC resource and the techniques for data-intensive sciences can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and takes 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time, an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7 percent averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93 percent of the cases.
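
    The core of a VPIN-style calculation can be sketched briefly: fill equal-volume buckets from the trade stream, track the signed order-flow imbalance in each bucket, and average |imbalance|/bucket volume over a trailing window. Trade classification (side = +1 buy, -1 sell) is assumed done upstream; the paper's bulk-classification details and parameter sweep are not reproduced here.

```python
# Simplified VPIN sketch; inputs and the classification scheme are assumptions.

def vpin(trades, bucket_volume, window):
    """trades: iterable of (volume, side) with side = +1 buy, -1 sell."""
    buckets, filled, imbalance = [], 0.0, 0.0
    for volume, side in trades:
        while volume > 0:
            take = min(volume, bucket_volume - filled)
            filled += take
            imbalance += side * take
            volume -= take
            if filled >= bucket_volume:
                buckets.append(abs(imbalance) / bucket_volume)
                filled, imbalance = 0.0, 0.0
    recent = buckets[-window:]
    return sum(recent) / len(recent) if recent else None

one_sided = vpin([(10.0, +1)] * 10, bucket_volume=20.0, window=5)
balanced = vpin([(10.0, +1), (10.0, -1)] * 5, bucket_volume=20.0, window=5)
```

    A one-sided order flow drives the indicator toward 1, a balanced flow toward 0, which is why rising VPIN flags informed, liquidity-draining trading.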

  15. TEDANN: Turbine engine diagnostic artificial neural network

    SciTech Connect (OSTI)

    Kangas, L.J.; Greitzer, F.L.; Illi, O.J. Jr.

    1994-03-17

    The initial focus of TEDANN is on AGT-1500 fuel flow dynamics: that is, fuel flow faults detectable in the signals from the Electronic Control Unit's (ECU) diagnostic connector. These voltage signals represent the status of the Electro-Mechanical Fuel System (EMFS) in response to ECU commands. The EMFS is a fuel metering device that delivers fuel to the turbine engine under the management of the ECU. The ECU is an analog computer whose fuel flow algorithm is dependent upon throttle position, ambient air and turbine inlet temperatures, and compressor and turbine speeds. Each of these variables has a representative voltage signal available at the ECU's J1 diagnostic connector, which is accessed via the Automatic Breakout Box (ABOB). The ABOB is a firmware program capable of converting 128 separate analog data signals into digital format. The ECU's J1 diagnostic connector provides 32 analog signals to the ABOB. The ABOB contains a 128-to-1 multiplexer and an analog-to-digital converter, both operated by an 8-bit embedded controller. The Army Research Laboratory (ARL) developed and published the hardware specifications as well as the micro-code for the ABOB Intel EPROM processor and the internal code for the multiplexer driver subroutine. Once the ECU analog readings are converted into a digital format, the data stream will be input directly into TEDANN via the serial RS-232 port of the Contact Test Set (CTS) computer. The CTS computer is an IBM-compatible personal computer designed and constructed for tactical use on the battlefield. The CTS has a 50MHz 32-bit Intel 80486DX processor. It has a 200MB hard drive and 8MB RAM. The CTS also has serial, parallel and SCSI interface ports. The CTS will also host a frame-based expert system for diagnosing turbine engine faults (referred to as TED; not shown in Figure 1).

  16. RAMONA-4B development for SBWR safety studies

    SciTech Connect (OSTI)

    Rohatgi, U.S.; Aronson, A.L.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.

    1993-12-31

    The Simplified Boiling Water Reactor (SBWR) is a revolutionary design of a boiling-water reactor. The reactor is based on passive safety systems such as natural circulation, gravity flow, pressurized gas, and condensation. SBWR has no active systems, and the flow in the vessel is by natural circulation. There is a large chimney section above the core to provide a buoyancy head for natural circulation. The reactor can be shut down by any of four systems: scram, Fine Motion Control Rod Drive (FMCRD), Alternate Rod Insertion (ARI), and Standby Liquid Control System (SLCS). The safety injection is by gravity drain from the Gravity Driven Cooling System (GDCS) and Suppression Pool (SP). The heat sink is through two types of heat exchangers submerged in a tank of water. These heat exchangers are the Isolation Condenser (IC) and the Passive Containment Cooling System (PCCS). The RAMONA-4B code has been developed to simulate normal operation and reactivity transients and to address the instability issues for SBWR. The code has a three-dimensional neutron kinetics coupled to multiple parallel-channel thermal-hydraulics. The two-phase thermal hydraulics is based on a nonhomogeneous nonequilibrium drift-flux formulation. It employs an explicit integration to solve all state equations (except for neutron kinetics) in order to predict the instability without numerical damping. The objective of this project is to develop a Sun SPARC and IBM RISC 6000 based RAMONA-4B code for applications to SBWR safety analyses, in particular for stability and ATWS studies.
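
    The reason explicit integration matters for instability prediction can be shown with a generic numerical example: on an undamped oscillator, backward (implicit) Euler artificially shrinks the amplitude, while forward (explicit) Euler does not damp it. This is a textbook illustration of the design choice, not RAMONA-4B's drift-flux equations.

```python
import math

def forward_euler_amplitude(steps, dt):
    # explicit Euler on x'' = -x; amplitude is never numerically damped
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return math.hypot(x, v)

def backward_euler_amplitude(steps, dt):
    # implicit Euler on the same oscillator; each step shrinks the amplitude
    x, v = 1.0, 0.0
    d = 1.0 + dt * dt
    for _ in range(steps):
        x, v = (x + dt * v) / d, (v - dt * x) / d
    return math.hypot(x, v)

explicit_amp = forward_euler_amplitude(200, 0.05)   # grows slightly: no damping
implicit_amp = backward_euler_amplitude(200, 0.05)  # decays: artificial damping
```

    An implicit scheme could mask a physical flow instability as numerical decay, which is why the code integrates the state equations explicitly.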

  17. Studies of acute and chronic radiation injury at the Biological and Medical Research Division, Argonne National Laboratory, 1953-1970: Description of individual studies, data files, codes, and summaries of significant findings

    SciTech Connect (OSTI)

    Grahn, D.; Fox, C.; Wright, B.J.; Carnes, B.A.

    1994-05-01

    Between 1953 and 1970, studies on the long-term effects of external x-ray and {gamma} irradiation on inbred and hybrid mouse stocks were carried out at the Biological and Medical Research Division, Argonne National Laboratory. The results of these studies, plus the mating, litter, and pre-experimental stock records, were routinely coded on IBM cards for statistical analysis and record maintenance. Also retained were the survival data from studies performed in the period 1943-1953 at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland. The card-image data files have been corrected where necessary and refiled on hard disks for long-term storage and ease of accessibility. In this report, the individual studies and data files are described, and pertinent factors regarding caging, husbandry, radiation procedures, choice of animals, and other logistical details are summarized. Some of the findings are also presented. Descriptions of the different mouse stocks and hybrids are included in an appendix; more than three dozen stocks were involved in these studies. Two other appendices detail the data files in their original card-image format and the numerical codes used to describe the animal's exit from an experiment and, for some studies, any associated pathologic findings. Tabular summaries of sample sizes, dose levels, and other variables are also given to assist investigators in their selection of data for analysis. The archive is open to any investigator with legitimate interests and a willingness to collaborate and acknowledge the source of the data and to recognize appropriate conditions or caveats.

  18. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    SciTech Connect (OSTI)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.
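
    The directive-insertion idea can be illustrated in miniature: a toolkit-interpretable directive embedded in a build script is rewritten per platform, while ordinary lines pass through unchanged, preserving the original instruction flow. The directive syntax and module names below are invented for this sketch, not the SCVM's actual vocabulary.

```python
# Hypothetical per-platform directive table; "#%load-compiler" is an
# invented directive marker, and the module commands are placeholders.
PLATFORM_COMMANDS = {
    "cray": {"#%load-compiler": "module load PrgEnv-cray"},
    "ibm":  {"#%load-compiler": "module load xlf"},
}

def translate_script(lines, platform):
    """Replace known directives with platform commands; pass the rest through."""
    table = PLATFORM_COMMANDS[platform]
    return [table.get(line.strip(), line) for line in lines]

script = ["#%load-compiler", "make all"]
```

    A real toolkit would intercept the commands at execution time rather than rewriting the file, but the semantics-preserving substitution is the same.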

  19. Parallel community climate model: Description and user's guide

    SciTech Connect (OSTI)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H.

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
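
    The decomposition strategy can be sketched as dealing geographic patches out to processors so that each computes physics only on its local subset. The round-robin assignment below is one simple choice for illustration, not PCCM2's actual mapping.

```python
# Hypothetical patch-to-processor assignment; patch counts are arbitrary.

def decompose(n_lat_patches, n_lon_patches, n_procs):
    """Deal (lat, lon) patches round-robin across processors."""
    patches = [(i, j) for i in range(n_lat_patches)
                      for j in range(n_lon_patches)]
    return {p: patches[p::n_procs] for p in range(n_procs)}

assignment = decompose(4, 8, 5)
counts = [len(v) for v in assignment.values()]
# every patch is owned by exactly one processor, balanced within one patch
```

    Physics then runs independently per processor, while the spectral transforms require the communication patterns the report's parallel algorithms address.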

  20. Argonne National Laboratory Physics Division annual report, January--December 1996

    SciTech Connect (OSTI)

    Thayer, K.J.

    1997-08-01

    The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator (TJNAF) Facility that the Argonne medium energy nuclear physics group was responsible for, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are imbedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; atomic and molecular structure with high-energy x-rays. The experimental efforts are being complemented with efforts in theory, from QCD to nucleon-meson systems to structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours beam on target for experiments during the past fiscal year.

  1. Scalable Performance Measurement and Analysis

    SciTech Connect (OSTI)

    Gamblin, T

    2009-10-27

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
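
    The wavelet-based reduction of load-balance data can be sketched with a small Haar transform: decompose a time series, keep only the largest-magnitude coefficients, and invert. The series length must be a power of two, and all values here are made up for illustration; Libra's actual multi-scale scheme is more elaborate.

```python
def haar(series):
    """Full Haar decomposition of a power-of-two-length series (in place copy)."""
    out, n = list(series), len(series)
    while n > 1:
        half = n // 2
        averages = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        details = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = averages + details
        n = half
    return out

def ihaar(coeffs):
    """Inverse of haar()."""
    out, n = list(coeffs), 1
    while n < len(out):
        merged = []
        for a, d in zip(out[:n], out[n:2 * n]):
            merged += [a + d, a - d]
        out[:2 * n] = merged
        n *= 2
    return out

def keep_largest(coeffs, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    ranked = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]),
                    reverse=True)[:keep]
    kept = set(ranked)
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]

load = [4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 8.0, 8.0]  # toy per-step load values
coeffs = haar(load)
restored = ihaar(keep_largest(coeffs, 2))  # 2 of 8 coefficients suffice here
```

    Smooth, piecewise-constant load traces compress to a handful of coefficients, which is what makes systemwide time-varying load data tractable to collect.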

  2. Adversary Sequence Interruption Model

    Energy Science and Technology Software Center (OSTI)

    1985-11-15

    PC EASI is an IBM personal computer or PC-compatible version of an analytical technique for measuring the effectiveness of physical protection systems. PC EASI utilizes a methodology called Estimate of Adversary Sequence Interruption (EASI) which evaluates the probability of interruption (PI) for a given sequence of adversary tasks. Probability of interruption is defined as the probability that the response force will arrive before the adversary force has completed its task. The EASI methodology is a probabilistic approach that analytically evaluates basic functions of the physical security system (detection, assessment, communications, and delay) with respect to response time along a single adversary path. It is important that the most critical scenarios for each target be identified to ensure that vulnerabilities have not been overlooked. If the facility is not overly complex, this can be accomplished by examining all paths. If the facility is complex, a global model such as Safeguards Automated Facility Evaluation (SAFE) may be used to identify the most vulnerable paths. PC EASI is menu-driven with screen forms for entering and editing the basic scenarios. In addition to evaluating PI for the basic scenario, the sensitivities of many of the parameters chosen in the scenario can be analyzed. These sensitivities provide information to aid the analyst in determining the tradeoffs for reducing the probability of interruption. PC EASI runs under Micro Data Base Systems' proprietary database management system Knowledgeman. KMAN provides the user environment and file management for the specified basic scenarios, and KGRAPH provides the graphical output of the sensitivity calculations. This software is not included. Due to errors in release 2 of KMAN, PC EASI will not execute properly; release 1.07 of KMAN is required.
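
    An EASI-style probability-of-interruption calculation can be sketched as a sum over detection opportunities: the chance the adversary is first detected at a given point, times the chance the response force arrives within the adversary's remaining delay. The normally distributed response-time model and the sample numbers are illustrative simplifications, not PC EASI's exact inputs.

```python
from math import erf, sqrt

def probability_of_interruption(path, response_mean, response_sd):
    """path: list of (p_detect, remaining_delay_seconds) along one adversary
    route, in order. Returns PI under a Gaussian response-time assumption."""
    pi, p_undetected_so_far = 0.0, 1.0
    for p_detect, remaining_delay in path:
        # P(response time < adversary's remaining task delay)
        z = (remaining_delay - response_mean) / (response_sd * sqrt(2.0))
        p_response_in_time = 0.5 * (1.0 + erf(z))
        pi += p_undetected_so_far * p_detect * p_response_in_time
        p_undetected_so_far *= (1.0 - p_detect)
    return pi

# early detection with ample remaining delay: interruption is nearly certain
high = probability_of_interruption([(0.9, 600.0)],
                                   response_mean=300.0, response_sd=30.0)
# detection after most of the delay is spent rarely helps
low = probability_of_interruption([(0.9, 120.0)],
                                  response_mean=300.0, response_sd=30.0)
```

    The sensitivity analyses the abstract mentions amount to sweeping these inputs and observing how PI responds.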

  3. Computer simulation of coal preparation plants. Part 2. User's manual. Final report

    SciTech Connect (OSTI)

    Gottfried, B.S.; Tierney, J.W.

    1985-12-01

    This report describes a comprehensive computer program that allows the user to simulate the performance of realistic coal preparation plants. The program is very flexible in the sense that it can accommodate any particular plant configuration that may be of interest. This allows the user to compare the performance of different plant configurations and to determine the impact of various modes of operation with the same configuration. In addition, the program can be used to assess the degree of cleaning obtained with different coal feeds for a given plant configuration and a given mode of operation. Use of the simulator requires that the user specify the appearance of the plant configuration, the plant operating conditions, and a description of the coal feed. The simulator will then determine the flowrates within the plant, and a description of each flowrate (i.e., the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, and Btu content). The simulation program has been written in modular form using the Fortran language. It can be implemented on a great many different types of computers, ranging from large scientific mainframes to IBM-type personal computers with a fixed disk. Some customization may be required, however, to ensure compatibility with the features of Fortran available on a particular computer. Part I of this report contains a general description of the methods used to carry out the simulation. Each of the major types of units is described separately, in addition to a description of the overall system analysis. Part II is intended as a user's manual. It contains a listing of the mainframe version of the program, instructions for its use (on both a mainframe and a microcomputer), and output for a representative sample problem.

  4. CPS and the Fermilab farms

    SciTech Connect (OSTI)

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level, rather than at the traditional single line of code level. This fits the requirements of high energy physics applications, such as event reconstruction, or detector simulations, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry. In the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is most suited for jobs with relatively low I/O requirements compared to CPU. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another which has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.
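
    Large-grained parallelism can be shown in miniature: independent events are handed to a pool of workers and processed a whole subroutine at a time, which is the level at which CPS parallelizes. The thread pool and the toy "reconstruction" routine below stand in for CPS's distributed processes and real physics code; they are not CPS's API.

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct_event(event):
    # placeholder for an expensive per-event subroutine
    return sum(hit * hit for hit in event)

events = [[1.0, 2.0], [3.0], [2.0, 2.0, 1.0]]
with ThreadPoolExecutor(max_workers=2) as pool:
    # each worker takes whole events, mirroring subroutine-level granularity
    results = list(pool.map(reconstruct_event, events))
```

    Because events are independent and the work per event dwarfs the communication, this pattern scales across a farm of workstations with low I/O relative to CPU, exactly the regime the abstract describes.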

  5. RTAP evaluation

    SciTech Connect (OSTI)

    Cupps, K.; Elko, S.; Folta, P.

    1995-01-23

    An in-depth analysis of the RTAP product was undertaken within the CNC associate program to determine the feasibility of using it to replace the current Supervisory Control System that supports the AVLIS program. This document contains the results of that evaluation. With some fundamental redesign, the current Supervisory Control system could meet the needs described above, but the redesign would require a large amount of time-consuming software rewriting, and the higher-level functionality (alarming, automation, etc.) would have to wait until its completion. Our current understanding and preliminary testing indicate that using commercial software is the best way to get these new features at minimum cost to the program. Additional savings will be obtained by moving the maintenance costs of the basic control system from in-house to commercial industry, allowing our developers to concentrate on the unique control areas that require customization. Our current operating system, VMS, has become a hindrance; the UNIX operating system has become the choice for most scientific and engineering systems, and we should follow suit. As a result of the commercial system survey referenced above, we selected RTAP, a SCADA product developed by Hewlett-Packard (HP), as the most favorable product to replace the current supervisory system in AVLIS. It is an extremely open system with a large, well-defined Application Programming Interface (API), which will allow the seamless integration of unique front-end devices in the laser area (e.g., the Optical Device Controller). RTAP also provides functionality that is lacking in our current system: integrated alarming, a real-time configurable database, system scalability, and a Sequence Control Language (developed by CPU, an RTAP Channel Partner) that will facilitate the automation necessary to bring the AVLIS process to plant-line operation. It runs on HP-9000, DEC-Alpha, IBM-RS6000, and Sun workstations.

  6. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI,Y.; BALTZ,A.; CREUTZ,M.; GYULASSY,M.; OHTA,S.

    2002-09-26

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from Columbia University and the RBRC, with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite-temperature QCD, and kaon ({Delta}I = 1/2 and CP violation) and nucleon (the structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g., up to {approx}A = 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy-ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy-ion reactions at RHIC. Scott Pratt and Steffen Bass gave general overviews of

  7. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (OSTI)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel, and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures

  8. The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill

    SciTech Connect (OSTI)

    Fowler, Robert

    2012-12-10

    This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and InfiniBand communications networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust was performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was collected in a database for the purpose of broader analysis. Third, the UNC

  9. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect (OSTI)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555 ; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; National Nanotechnology Center, Pathumthani 12120 ; Ohmura, Satoshi; Department of Physics, Kumamoto University, Kumamoto 860-8555; Department of Physics, Kyoto University, Kyoto 606-8502 ; Shimamura, Kohei; Department of Physics, Kumamoto University, Kumamoto 860-8555; Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, Fukuoka 819-0395

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10{sup 6}-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of
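    The isogranular (weak-scaling) parallel efficiency of 0.984 quoted above compares runtimes while the work per core is held fixed; the timing numbers below are made up for illustration, not taken from the paper.

```python
def isogranular_efficiency(t_base, t_scaled):
    # Weak scaling: the problem grows with the core count, so the ideal
    # runtime stays flat; efficiency is the small-run/large-run ratio.
    return t_base / t_scaled

# Hypothetical per-step times: 98.4 s on the small partition versus
# 100.0 s on the full machine reproduces the reported 0.984.
print(round(isogranular_efficiency(98.4, 100.0), 3))  # 0.984
```

    An efficiency near 1.0 at 786,432 cores means the per-domain cost barely grows as more domains are added, which is the point of the divide-and-conquer decomposition.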

  10. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect (OSTI)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architectures. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, thanks to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms.

  11. Accelerating scientific discovery : 2007 annual report.

    SciTech Connect (OSTI)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the key to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance in using the Blue Gene/L and optimizing user applications. Both the Catalyst and the Applications Performance Engineering and Data Analytics (APEDA) teams support users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide guidance for applications.

  12. Towards Energy-Centric Computing and Computer Architecture

    SciTech Connect (OSTI)

    2011-02-09

    , IBM Faculty Partnership Awards between 2001 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of IEEE and ACM.

  13. A Fault Oblivious Extreme-Scale Execution Environment

    SciTech Connect (OSTI)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to data and work distribution for massively parallel, fault-oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We experimented with alternative task/dataflow programming models and showed scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open-source projects, and concepts from FOX are being pursued in next-generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- and many-core processors. We developed a new operating system, NIX, which supports role-based allocation of cores to processes and was released as open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task-parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task-parallel computations.
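    The Linda model mentioned above coordinates tasks through a shared tuple space: producers deposit tuples with out, and workers withdraw matching tuples. The sketch below is a minimal single-process stand-in (FOX's library used a distributed, fault-tolerant store); all names are illustrative, not the FOX API.

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space: out() deposits, inp() withdraws."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        # Deposit a tuple for any worker to claim.
        with self._lock:
            self._tuples.append(tup)

    def inp(self, pattern):
        # Non-blocking withdraw: None in the pattern matches any field.
        with self._lock:
            for i, t in enumerate(self._tuples):
                if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)
                ):
                    return self._tuples.pop(i)
        return None

space = TupleSpace()
space.out(("task", 1, "square", 7))
space.out(("task", 2, "square", 9))
print(space.inp(("task", None, "square", None)))  # ('task', 1, 'square', 7)
```

    Because workers never address each other directly, a failed worker's unclaimed tuples simply remain in the space for another worker to pick up, which is what makes the model attractive for fault-oblivious execution.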

  14. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    SciTech Connect (OSTI)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni; Wu, Yu-Shu; Pruess, Karsten

    2008-05-27

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one-, two-, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the standard version of the TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology, code structure, and the mathematical and numerical methods used.

  15. Translational Genomics for the Improvement of Switchgrass

    SciTech Connect (OSTI)

    Carpita, Nicholas; McCann, Maureen

    2014-05-07

    Our objectives were to apply bioinformatics and high-throughput sequencing technologies to identify and classify the genes involved in cell wall formation in maize and switchgrass. Targets for genetic modification were to be identified, and cell wall materials isolated and assayed for enhanced performance in bioprocessing. We annotated and assembled over 750 maize genes into gene families predicted to function in cell wall biogenesis. Comparative genomics of maize, rice, and Arabidopsis sequences revealed differences in gene family structure. In addition, differences in expression between gene family members of Arabidopsis, maize, and rice underscored the need for a grass-specific genetic model for functional analyses. A forward screen of mature leaves of field-grown maize lines by near-infrared spectroscopy yielded several dozen lines with heritable spectroscopic phenotypes; several of these near-infrared (nir) mutants had altered carbohydrate-lignin compositions. Our contributions to the maize genome sequencing effort built on knowledge of copy number variation, showing that uneven gene losses between duplicated regions were involved in returning an ancient allotetraploid to a genetically diploid state. For example, although about 25% of all duplicated genes remain genome-wide, all of the cellulose synthase (CesA) homologs were retained. We showed that guaiacyl and syringyl lignin content in lignocellulosic cell-wall materials from stems demonstrates a two-fold natural variation across a population of maize Intermated B73 x Mo17 (IBM) recombinant inbred lines, a maize Association Panel of 282 inbreds and landraces, and three populations of the maize Nested Association Mapping (NAM) recombinant inbred lines grown in three years. We then defined quantitative trait loci (QTL) for stem lignin content, measured using pyrolysis molecular-beam mass spectrometry, and for glucose and xylose yield, measured using an enzymatic hydrolysis assay. Among five multi-year QTL for lignin

  16. Greenhouse Gas Mitigation Options in ISEEM Global Energy Model: 2010-2050 Scenario Analysis for Least-Cost Carbon Reduction in Iron and Steel Sector

    SciTech Connect (OSTI)

    Karali, Nihan; Xu, Tengfang; Sathaye, Jayant

    2013-12-01

    The goal of the modeling work carried out in this project was to quantify long-term scenarios for future emission reduction potentials in the iron and steel sector. The main focus of the project is to examine the impacts of carbon reduction options in the U.S. iron and steel sector under a set of selected scenarios. In order to advance the understanding of carbon emission reduction potential on the national and global scales, and to evaluate the regional impacts of potential U.S. mitigation strategies (e.g., commodity and carbon trading), we also included and examined carbon reduction scenarios for China's and India's iron and steel sectors in this project. For this purpose, a new bottom-up energy modeling framework, the Industrial Sector Energy Efficiency Modeling (ISEEM) framework (Karali et al. 2012), was used to provide detailed annual projections from 2010 through 2050. We used the ISEEM modeling framework to carry out detailed analysis, on a country-by-country basis, for the U.S., Chinese, and Indian iron and steel sectors. The ISEEM model applicable to the iron and steel sector, called ISEEM-IS, was developed to estimate and evaluate carbon emission scenarios under several alternative mitigation options, including policies (e.g., carbon caps), commodity trading, and carbon trading. The projections will help us to better understand emission reduction potentials and their technological and economic implications. The input database of the ISEEM-IS model consists of data and information compiled from various resources such as the World Steel Association (WSA), the U.S. Geological Survey (USGS), China Steel Year Books, the India Bureau of Mines (IBM), the Energy Information Administration (EIA), and recent LBNL studies on bottom-up techno-economic analysis of energy efficiency measures in the iron and steel sectors of the U.S., China, and India, including long-term steel production in China. In the ISEEM-IS model, production technology and manufacturing details are

  17. Fundamental Mechanisms Driving the Amorphous to Crystalline Phase Transformation

    SciTech Connect (OSTI)

    Reed, B W; Browning, N D; Santala, M K; LaGrange, T; Gilmer, G H; Masiel, D J; Campbell, G H; Raoux, S; Topuria, T; Meister, S; Cui, Y

    2011-01-04

    -stabilized metastable rock salt structure. Each transformation takes {approx}10-100 ns, and the cycle can be driven repeatedly a very large number of times with a nanosecond laser such as the DTEM's sample drive laser. These materials are widely used in optical storage devices such as rewritable CDs and DVDs, and they are also applied in a novel solid-state memory technology: phase change memory (PCM). PCM has the potential to produce nonvolatile memory systems with high speed, extreme density, and very low power requirements. For PCM applications, several materials properties are of great importance: the resistivities of both phases, the crystallization temperature, the melting point, the crystallization speed, reversibility (the number of phase-transformation cycles without degradation), and stability against crystallization at elevated temperature. For a viable technology, all these properties need to have good scaling behavior, as the dimensions of the memory cells will shrink with every generation. In this LDRD project, we used the unique single-shot nanosecond in situ experimentation capabilities of the DTEM to watch these transformations in GST on the time and length scales most relevant for device applications. Interpretation of the results was performed in conjunction with atomistic and finite-element computations. Samples were provided by collaborators at IBM and Stanford University. We observed, and measured the kinetics of, the amorphous-crystalline and melting-solidification transitions in uniform thin-film samples. Above a certain threshold, the crystal nucleation rate was found to be enormously high (with many nuclei appearing per cubic {micro}m even after nanosecond-scale incubation times), in agreement with atomistic simulation and consistent with an extremely low nucleation barrier. We developed data reduction techniques based on principal component analysis (PCA), revealing the complex, multi-dimensional evolution of the material while suppressing noise and irrelevant

  18. A Measurement Management Technology for Improving Energy Efficiency in Data Centers and Telecommunication Facilities

    SciTech Connect (OSTI)

    Hendrik Hamann, Levente Klein

    2012-06-28

    technologies were added to the existing MMT platform: (1) air contamination (corrosion) sensors, (2) power monitoring, and (3) a wireless environmental sensing network. All three technologies are built on cost-effective sensing solutions that increase the density of sensing points and enable high-resolution mapping of DCs. The wireless sensing solution enables Air Conditioning Unit (ACU) control, while the corrosion sensor enables air-side economization and can quantify the risk of IT equipment failure due to air contamination. Validation data from six test sites demonstrate that leveraging MMT energy efficiency solutions combined with industry best practices results in an average 20% reduction in cooling energy, without major infrastructure upgrades. As an illustration of the unique MMT capabilities, a data center infrastructure efficiency (DCIE) of 87% (industry-best operation) was achieved. The technology is commercialized through IBM System and Technology Lab Services, which offers MMT as a solution to improve DC energy efficiency. Estimates indicate that deploying MMT in existing DCs can result in savings of 8 billion kWh, and projections indicate that continued adoption of MMT can yield obtainable savings of 44 billion kWh in 2035. Negotiations are under way with business partners to commercialize/license the ACU control technology and the new sensor solutions (corrosion and power sensing) to enable third-party vendors and developers to leverage the energy efficiency solutions.
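    The DCIE metric cited above is simply the fraction of total facility power that reaches the IT equipment (the reciprocal of PUE); the load figures below are hypothetical, chosen to reproduce the quoted 87%.

```python
def dcie(it_power_kw, facility_power_kw):
    # Data Center infrastructure Efficiency: IT power / total facility
    # power. Equivalently, DCIE = 1 / PUE.
    return it_power_kw / facility_power_kw

# Hypothetical loads: 870 kW of IT gear in a 1000 kW facility gives the
# 87% industry-best figure quoted in the abstract.
print(dcie(870.0, 1000.0))  # 0.87
```

    The remaining 13% is overhead (cooling, power distribution, lighting), which is exactly the share that the MMT cooling optimizations attack.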

  19. Computation Directorate and Science& Technology Review Computational Science and Research Featured in 2002

    SciTech Connect (OSTI)

    Alchorn, A L

    2003-04-04

    Thank you for your interest in the activities of the Lawrence Livermore National Laboratory Computation Directorate. This collection of articles from the Laboratory's Science & Technology Review highlights the most significant computational projects, achievements, and contributions during 2002. In 2002, LLNL marked the 50th anniversary of its founding. Scientific advancement in support of our national security mission has always been the core of the Laboratory. So that researchers could better understand and predict complex physical phenomena, the Laboratory has pushed the limits of the largest, fastest, most powerful computers in the world. In the late 1950s, Edward Teller, one of the LLNL founders, proposed that the Laboratory commission a Livermore Advanced Research Computer (LARC) built to Livermore's specifications. He tells the story of being in Washington, DC, when John Von Neumann asked to talk about the LARC. Von Neumann thought Teller wanted too much memory in the machine (the specifications called for 20-30,000 words), and Teller was too smart to argue with him. Later Teller invited Von Neumann to the Laboratory and showed him one of the design codes being prepared for the LARC. He asked Von Neumann for suggestions on fitting the code into 10,000 words of memory, and flattered him about ''Labbies'' not being smart enough to figure it out. Von Neumann dropped his objections, and the LARC arrived with 30,000 words of memory. Memory, and how close memory is to the processor, is still of interest to us today. Livermore's first supercomputer was the Remington-Rand Univac-1. It had 5600 vacuum tubes and was 2 meters wide by 4 meters long. This machine was commonly referred to as a 1 KFlop machine [E+3]. Skip ahead 50 years: the ASCI White machine at the Laboratory today, produced by IBM, is rated at a peak performance of 12.3 TFlops, or E+13. We've improved computer processing power by 10 orders of magnitude in 50 years, and I do not believe there's any reason to think we won

  20. Recovery Act: Integrated DC-DC Conversion for Energy-Efficient Multicore Processors

    SciTech Connect (OSTI)

    Shepard, Kenneth L

    2013-03-31

    In this project, we have developed the use of thin-film magnetic materials to improve the energy efficiency of digital computing applications by enabling integrated dc-dc power conversion and management with on-chip power inductors. Integrated voltage regulators also enable fine-grained power management by providing dynamic scaling of the supply voltage in concert with the clock frequency of synchronous logic, throttling power consumption during periods of low computational demand. The voltage converter generates lower output voltages during periods of low computational performance requirements and higher output voltages during periods of high computational performance requirements. Implementation of integrated power conversion requires high-capacity energy storage devices, which are generally not available in traditional semiconductor processes. We achieve this through the integration of thin-film magnetic materials into a conventional complementary metal-oxide-semiconductor (CMOS) process for high-quality on-chip power inductors. This project includes a body of work conducted to develop integrated switch-mode voltage regulators with thin-film magnetic power inductors. Soft-magnetic materials and inductor topologies are selected and optimized with the intent to maximize the efficiency and current density of the integrated regulators. A custom integrated circuit (IC) was designed and fabricated in 45-nm CMOS silicon-on-insulator (SOI) to provide the control system and power train necessary to drive the power inductors, in addition to providing a digital load for the converter. A silicon interposer was designed and fabricated in collaboration with IBM Research to integrate custom power inductors by chip stacking with the 45-nm CMOS integrated circuit, enabling power conversion with current density greater than 10 A/mm2. The concepts and designs developed from this work enable significant improvements in performance-per-watt of future microprocessors in servers, desktops, and mobile
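    The payoff of scaling supply voltage together with clock frequency follows from the standard switching-power relation P ≈ C·V²·f for synchronous CMOS logic; the capacitance and operating points below are illustrative numbers, not measurements from this project.

```python
def dynamic_power(c_eff, v_supply, f_clock):
    # Dynamic (switching) power of synchronous CMOS logic: C * V^2 * f.
    return c_eff * v_supply ** 2 * f_clock

# Halving both voltage and frequency cuts dynamic power by 8x, which is
# why fine-grained DVFS with a fast integrated regulator pays off.
p_full = dynamic_power(1e-9, 1.0, 2e9)  # 2.0 W at 1.0 V, 2 GHz
p_half = dynamic_power(1e-9, 0.5, 1e9)  # 0.25 W at 0.5 V, 1 GHz
print(p_full / p_half)  # 8.0
```

    The quadratic dependence on voltage is why lowering V (and not just f) dominates the savings, and why the converter must track the load's performance requirement closely.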

  1. The CO-OP Guide

    SciTech Connect (OSTI)

    Michael, J.; /Fermilab

    1991-08-16

    ring. Inside the booster ring are the East and West Booster towers, which contain cryogenic support groups. The D0 cryo group offices used to be in the West Booster Portakamps. Away from Wilson Hall, there are various buildings strewn about the Fermilab property that have important functional uses to D0. One such example is Lab A. This is where the now unused bubble chamber resides, which was used to take pictures of particle motion. Many of our group are from the bubble chamber, and occasionally stories from the 'bubble chamber days' can be heard as someone waxes nostalgic. Lab A has a machine shop and many technicians. All three of the cryostats used in the D0 experiment went through Lab A for preparation and installation work. Lab A is located directly up the road from the front of Wilson Hall (north-east). Its unmistakable dark geodesic dome makes it easy to find. The Feynman Computer building, located east and just a little bit north of Wilson Hall, houses the computer repair people. If any of the computers used in our group crash and burn, we must take them to the third floor of Feynman to be fixed or exchanged. On one side is the Prep department, which handles the VAX mainframe computers, and on the other is personal computer repair, which handles Fermi Macs and IBMs. Directly north of Wilson Hall is Site 38. This site is the location of many important Fermilab facilities, such as the Fermi fire department, the carpenter's shop, the Fermi gas pumps, the main stock room, and shipping and receiving. Lastly, but perhaps most significantly, there is the Fermilab Village. In addition to the machine shops, the cut shop, welding facilities, and the garishly painted physicist dorms, there are such things as a gym, a pool, and other facilities to take the edge off a weary mind. The village is located just north off Batavia Road on the east side of Fermilab. The village barn is the first and most notable building as one approaches.

  2. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    SciTech Connect (OSTI)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    matrix-vector multiplication can be performed locally on each processor, minimizing communication. Furthermore, a good graph-partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph-partitioning algorithms employ some form of heuristics. These algorithms vary in their complexity, partition-generation time, and partition quality, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive distributed-memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph-partitioning tool, can only scale to 16K processors. An ideal graph-partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task: to scale to that level, the partitioning algorithm should be simple and should produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph-partitioning method with good load balancing and communication-reduction capability; (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare it to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed size
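The coalescing step the abstract describes, collapsing vertices into mega-vertices and parallel edges into weighted mega-edges, can be sketched as follows. This is a minimal illustration using fixed-size vertex blocks as the grouping, loosely inspired by the brick idea; it is not the authors' algorithm, and the function and variable names are assumptions.

```python
# Hedged sketch of graph coarsening: vertices are coalesced into
# "mega-vertices" by a grouping rule (here, fixed-size blocks), and edges
# between groups collapse into weighted "mega-edges".

from collections import defaultdict

def coarsen(num_vertices, edges, block_size):
    """Return (vertex -> mega-vertex map, mega-edge -> weight map).
    Intra-group edges disappear; parallel inter-group edges accumulate
    their multiplicity as an edge weight."""
    group = {v: v // block_size for v in range(num_vertices)}
    mega_edges = defaultdict(int)
    for u, v in edges:
        gu, gv = group[u], group[v]
        if gu != gv:                       # drop edges inside a group
            key = (min(gu, gv), max(gu, gv))
            mega_edges[key] += 1           # weight = collapsed edge count
    return group, dict(mega_edges)

# A 6-vertex path 0-1-2-3-4-5 coarsened with blocks of 2 -> 3 mega-vertices
group, medges = coarsen(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)], 2)
print(medges)  # {(0, 1): 1, (1, 2): 1}
```

A partitioner can then run on the much smaller weighted mega-graph, and the resulting partition is projected back onto the original mesh through the vertex-to-group map.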

  3. Final Report: Phase II Nevada Water Resources Data, Modeling, and Visualization (DMV) Center

    SciTech Connect (OSTI)

    Jackman, Thomas; Minor, Timothy; Pohll, Gregory

    2013-07-22

    Phase I, in which the hydrologic framework was investigated and the development initiated. Phase II concentrates on practical implementation of the earlier work but emphasizes applications to the hydrology of the Lake Tahoe basin. Phase I efforts have been refined and extended by creating a toolset for geographic information systems (GIS) that is usable for disparate types of geospatial and geo-referenced data. The toolset is intended to serve multiple users for a variety of applications. The web portal for internet access to hydrologic and remotely sensed product data, prototyped in Phase I, has been significantly enhanced. The portal provides high-performance access to LANDSAT-derived data using techniques developed during the course of the project. The portal is interactive and supports the geo-referenced display of hydrologic information derived from remotely sensed data, such as various vegetative indices used to calculate water consumption. The platform can serve both internal and external constituencies using inter-operating infrastructure that spans both sides of the DRI firewall. The platform is intended to grow its supported data assets and to serve as a template for replication to other geographic areas. An unanticipated development during the project was the use of ArcGIS software on a new computer system, IBM PureSystems, and the parallel use of the system for faster, more efficient image processing. Additional data, independent of the portal, were collected within the Sagehen basin and provide detailed information regarding the processes that control hydrologic responses within mountain watersheds. The newly collected data include elevation, evapotranspiration, energy balance, and remotely sensed snow-pack data. A Lake Tahoe basin hydrologic model has been developed, in part to help predict the hydrologic impacts of climate change. The model couples both the surface and subsurface hydrology, with the two components having been independently
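The "vegetative indices used to calculate water consumption" mentioned above are typically band ratios computed per pixel from LANDSAT reflectance. As a concrete example, here is the standard normalized difference vegetation index (NDVI); this is the textbook formula, not DRI's portal code, and the no-signal convention is an assumption.

```python
# Hedged sketch: NDVI, a common vegetative index derived from LANDSAT
# red and near-infrared (NIR) surface reflectance.

def ndvi(red: float, nir: float) -> float:
    """NDVI = (NIR - Red) / (NIR + Red); ranges from -1 to 1, with dense,
    healthy vegetation approaching 1."""
    denom = nir + red
    if denom == 0:
        return 0.0  # assumed convention for no-signal pixels
    return (nir - red) / denom

print(ndvi(0.10, 0.50))  # high NIR vs. red reflectance -> vegetated pixel
```

In practice the same arithmetic is applied array-wise over entire scenes, which is exactly the kind of per-scene workload that benefits from the parallel image processing the abstract describes.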