National Library of Energy BETA

Sample records for guido bartels ibm

  1. Guido DeHoratiis

    Broader source: Energy.gov [DOE]

    Guido DeHoratiis is the Associate Deputy Assistant Secretary, Office of Oil and Natural Gas, in the Department of Energy's Office of Fossil Energy.  In this position, he is responsible for...

  2. IBM era: 1960-64

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    To meet the growing computing needs of the nuclear weapons program, the Laboratory jointly developed with IBM the Stretch, IBM's first transistorized computer. July 10, 2015. "Highly accurate 3D computing is a Holy Grail of the Stockpile Stewardship Program's supercomputing efforts. As the weapons age, 3D features tend to be introduced that require highly accurate 3D modeling to...

  3. IBM References | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM References. Contents: IBM Redbooks; A2 Processor Manual; QPX Vector Instruction Set Architecture; XL Compiler Documentation; MASS Documentation...

  4. IBM Presentation Template Full Version

    U.S. Energy Information Administration (EIA) Indexed Site

    IBM Corporation. Smart Grid: Impacts on Electric Power Supply and Demand. 2010 Energy Conference: Short-Term Stresses, Long-Term Change. Michael Valocchi, Global Energy and Utilities Industry Leader, IBM Global Business Services, April 2010. Discussion Topics: The Business Model will Evolve; The Consumer Value Model will Transform; A New Energy Consumer will Emerge; Customer Segmentation will be Done in a Different Manner; Information and Data Sources will...

  5. V-178: IBM Data Studio Web Console Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    IBM Data Studio Web Console uses the IBM Java Runtime Environment (JRE) and might be affected by vulnerabilities in the IBM JRE

  6. V-132: IBM Tivoli System Automation Application Manager Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities April 12, ... T-694: IBM Tivoli Federated Identity Manager Products Multiple Vulnerabilities V-145: IBM ...

  7. T-686: IBM Tivoli Integrated Portal Java Double Literal Denial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    this November 2011. IBM Downloads. Related Articles: V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities; T-694: IBM Tivoli Federated Identity...

  8. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilit...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Related Articles: U-179: IBM Java 7 Multiple Vulnerabilities; V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities; V-094: IBM Multiple Products Multiple...

  9. International Business Machines Corp IBM | Open Energy Information

    Open Energy Info (EERE)

    Name: International Business Machines Corp (IBM). Place: Armonk, New York. Zip: 10504. Sector: Services. Product: IBM is a...

  10. Electricity Advisory Committee

    Energy Savers [EERE]

    June 5, 2012. Electricity Advisory Committee 2012 Membership Roster: Richard Cowart, Regulatory Assistance Project (Chair); Irwin Popowsky, Pennsylvania Consumer Advocate (Vice Chair); William Ball, Southern Company; Guido Bartels, IBM; Rick Bowen, Alcoa; Merwin Brown, California Institute for Energy and Environment; Ralph Cavanagh, Natural Resources Defense Council; The Honorable Paul Centolella, Public Utilities Commission of Ohio; David Crane, NRG Energy, Inc.; The Honorable Robert Curry, New York State Public...

  11. U-181: IBM WebSphere Application Server Information Disclosure...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A vulnerability has been reported in IBM WebSphere Application Server. PLATFORM: IBM WebSphere Application Server 6.1.x IBM WebSphere Application Server 7.0.x IBM WebSphere ...

  12. IBM's New Flat Panel Displays

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    by J. Stöhr (SSRL), M. Samant (IBM), J. Lüning (SSRL). Today's laptop computers utilize flat panel displays in which the light transmission from the back to the front of the display is modulated by orientation changes in liquid crystal (LC) molecules. Details are discussed in Ref. 2 below. One of the key steps in the manufacture of the displays is the alignment of the LC molecules in the display. Today this is done by mechanically rubbing two polymer surfaces and then sandwiching the LC between...

  13. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wednesday, 12 February 2014. Vanadium dioxide, one of the few known materials that acts like an insulator at low temperatures but like a metal at warmer temperatures, is a somewhat futuristic material that could yield faster and much more energy-efficient electronic devices. Researchers from IBM's forward-thinking Spintronic Science and Applications Center (SpinAps) recently used the ALS to gain...

  14. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Researchers from IBM's forward-thinking Spintronic Science and Applications Center (SpinAps) recently used the ALS to gain greater insight into vanadium dioxide's unusual phase ...

  15. Integrated Building Management System (IBMS)

    SciTech Connect (OSTI)

    Anita Lewis

    2012-07-01

    This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics (FDD) algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.
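
    The FDD behavior described above amounts to a passive residual check around the equipment's normal control behavior. The Python fragment below is a minimal sketch of that idea, not code from the IBMS project; the function name, tolerance, and data are all illustrative assumptions.

      # Hypothetical sketch of a passive fault-detection check in the spirit of
      # the FDD description above; names and thresholds are illustrative.
      def detect_fault(setpoints, readings, tolerance=1.5, min_samples=10):
          """Flag a fault when readings persistently deviate from the setpoint.
          Purely observational: it never writes back to the controller."""
          residuals = [abs(m - s) for s, m in zip(setpoints, readings)]
          if len(residuals) < min_samples:
              return False  # too little data to characterize normal behavior
          # A persistent offset (rather than a transient) suggests a fault.
          return sum(r > tolerance for r in residuals) > 0.8 * len(residuals)

      # Example: supply-air temperature tracking a 13 degC setpoint.
      setpoints = [13.0] * 12
      readings = [15.2, 15.0, 14.9, 15.3, 15.1, 15.4,
                  15.0, 15.2, 15.1, 15.3, 15.2, 15.0]
      print(detect_fault(setpoints, readings))  # True: persistent +2 degC offset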

  16. V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities. March 26, 2013. PROBLEM: IBM Security...

  17. V-094: IBM Multiple Products Multiple Vulnerabilities | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-094: IBM Multiple Products Multiple Vulnerabilities. February 19, 2013. PROBLEM: IBM Multiple Products Multiple Vulnerabilities. PLATFORM: IBM Maximo Asset Management versions 7.5, 7.1, and 6.2; IBM Maximo Asset Management Essentials versions 7.5, 7.1, and 6.2; IBM SmartCloud Control Desk version 7.5; IBM Tivoli Asset Management for IT versions 7.2, 7.1, and 6.2; IBM Tivoli Change and Configuration Management Database...

  18. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities. April ... Related Articles: V-178: IBM Data Studio Web Console Java Multiple Vulnerabilities ...

  19. U-116: IBM Tivoli Provisioning Manager Express for Software Distributi...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    for the affected ActiveX control. Related Articles: V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities; V-094: IBM Multiple Products Multiple...

  20. V-122: IBM Tivoli Application Dependency Discovery Manager Java...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities T-694: IBM Tivoli Federated Identity...

  1. V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-122: IBM Tivoli Application...

  2. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Corporation. Smart Grid: Impacts on Electric Power Supply and Demand. 2010 Energy Conference: Short-Term Stresses, Long-Term Change. Michael Valocchi, Global Energy and Utilities Industry Leader, IBM Global Business Services, April 2010. Discussion Topics: The Business Model will Evolve; The Consumer Value Model will Transform; A New Energy Consumer will Emerge; Customer Segmentation will be Done in a Different Manner; Information and Data Sources will...

  3. August 15, 2001: IBM ASCI White | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    August 15, 2001: IBM ASCI White. Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White supercomputer with 8,192 processors that perform 12.3 trillion operations per second.
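
    As a quick arithmetic check on those figures (our calculation, not part of the original announcement), dividing the aggregate rate by the processor count gives the per-processor throughput:

      \frac{12.3\times 10^{12}\ \text{ops/s}}{8192\ \text{processors}} \approx 1.5\times 10^{9}\ \text{ops/s per processor}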

  4. V-132: IBM Tivoli System Automation Application Manager Multiple

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities. April 12, 2013. PROBLEM: IBM has acknowledged multiple vulnerabilities in IBM Tivoli System Automation Application Manager. PLATFORM: The vulnerabilities are reported in IBM Tivoli System Automation Application Manager versions 3.1, 3.2, 3.2.1, and 3.2.2. ABSTRACT: Multiple security...

  5. V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities. June 18, 2013. PROBLEM: IBM Application Manager For Smart Business Multiple Vulnerabilities. PLATFORM: IBM Application Manager For Smart Business 1.x. ABSTRACT: A security issue and multiple vulnerabilities have been reported in IBM Application Manager For Smart Business. REFERENCE LINKS: Security Bulletin

  6. U-181: IBM WebSphere Application Server Information Disclosure

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-181: IBM WebSphere Application Server Information Disclosure Vulnerability. June 1, 2012. PROBLEM: A vulnerability has been reported in IBM WebSphere Application Server. PLATFORM: IBM WebSphere Application Server 6.1.x; IBM WebSphere Application Server 7.0.x; IBM WebSphere Application Server 8.0.x. ABSTRACT: The vulnerability is caused by missing access controls in

  7. U-198: IBM Lotus Expeditor Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-198: IBM Lotus Expeditor Multiple Vulnerabilities. June 25, 2012. PROBLEM: Multiple vulnerabilities have been reported in IBM Lotus Expeditor. PLATFORM: IBM Lotus Expeditor 6.x. ABSTRACT: The vulnerabilities can be exploited by malicious people to conduct cross-site scripting attacks, disclose potentially sensitive information, bypass certain security restrictions, and compromise a user's system. Reference Links: Vendor Advisory

  8. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities. April 30, 2013. PROBLEM: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities. PLATFORM: IBM Tivoli Federated Identity Manager versions 6.1, 6.2.0, 6.2.1, and 6.2.2; IBM Tivoli Federated Identity Manager Business Gateway versions 6.1.1, 6.2.0, 6.2.1

  9. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    SciTech Connect (OSTI)

    Turkan, Nureddin [Bozok University, Faculty of Arts and Science, Department of Physics, Divanh Yolu, 66200 Yozgat (Turkey)]

    2008-11-11

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated as soon as the B(E2) and B(M1) values are obtained with the PHINT and/or NP-BOS codes. Correct energies must first be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. The central problem is to find the best-fitted parameter values of the model. Using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for 102-110Pd and 102-110Ru isotopes were first obtained, and the energies were then calculated. The calculated results are in good agreement with the experimental ones. In addition, the energy values obtained with the EWofFP-IBM are clearly better than the previous theoretical data.
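
    Parameter searches of this kind typically fit the coefficients of a simplified IBM-1 Hamiltonian. A common schematic form, given here in LaTeX as background (an assumption about the general model, not the exact Hamiltonian coded in PHINT/NP-BOS), is:

      \hat{H} = \varepsilon\,\hat{n}_d - \kappa\,\hat{Q}^{\chi}\cdot\hat{Q}^{\chi},
      \qquad
      \hat{Q}^{\chi}_{\mu} = \left[s^{\dagger}\tilde{d} + d^{\dagger}s\right]^{(2)}_{\mu} + \chi\left[d^{\dagger}\tilde{d}\right]^{(2)}_{\mu}

    with the d-boson energy \varepsilon, the quadrupole strength \kappa, and the structure parameter \chi as the quantities fitted to the measured level energies.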

  10. T-681:IBM Lotus Symphony Multiple Unspecified Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple unspecified vulnerabilities in IBM Lotus Symphony 3 before FP3 have unknown impact and attack vectors, related to "critical security vulnerability issues."

  11. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks. August ... Related Articles: V-211: IBM iNotes Multiple Vulnerabilities; U-198: IBM Lotus ...

  12. V-054: IBM WebSphere Application Server for z/OS Arbitrary Command Execution Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability was reported in the IBM HTTP Server component 5.3 in IBM WebSphere Application Server (WAS) for z/OS

  13. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities. August 29, ...

  14. Survivability enhancement study for C³I/BM (communications...

    Office of Scientific and Technical Information (OSTI)

    Title: Survivability enhancement study for C³I/BM (communications, command, control and ...

  15. New ALS Technique Guides IBM in Next-Generation Semiconductor...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    chip, which then form transistors," says Jed Pitera, a research staff member in science and technology at IBM Research-Almaden. "But it's also really hard to do the...

  16. V-074: IBM Informix Genero libpng Integer Overflow Vulnerability...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    or display a malicious PNG file, your IBM Informix Genero application might crash, or could be caused to run malicious code with the privileges of the user running the application. ...

  17. Generalized Information Architecture for Managing Requirements in IBM's Rational DOORS® Application

    SciTech Connect (OSTI)

    Aragon, Kathryn M.; Eaton, Shelley M.; McCornack, Marjorie T.; Shannon, Sharon A.

    2014-12-01

    When a requirements engineering effort fails to meet expectations, the requirements management tool is often blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually it is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). It can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool. Version 1.0, 4 November 201

  18. V-211: IBM iNotes Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-211: IBM iNotes Multiple Vulnerabilities. August 5, 2013. PROBLEM: Multiple vulnerabilities have been reported in IBM Lotus iNotes. PLATFORM: IBM iNotes 9.x. ABSTRACT: IBM iNotes has two cross-site scripting vulnerabilities and an ActiveX integer overflow vulnerability. REFERENCE LINKS: Secunia Advisory SA54436; IBM Security Bulletin 1645503; CVE-2013-3027; CVE-2013-3032; CVE-2013-3990. IMPACT ASSESSMENT: High. DISCUSSION: 1) Certain input related

  19. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks. August 28, 2013. PROBLEM: Several vulnerabilities were reported in IBM Lotus iNotes. PLATFORM: IBM Lotus iNotes 8.5.x. ABSTRACT: IBM Lotus iNotes 8.5.x contains four cross-site scripting vulnerabilities. REFERENCE LINKS: Security Tracker Alert ID 1028954; IBM Security Bulletin 1647740

  20. U-111: IBM AIX ICMP Processing Flaw Lets Remote Users Deny Service...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    aixefixessecurityicmpfix.tar. Related Articles: U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service; V-031: IBM WebSphere DataPower...

  1. V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets. May 2, 2013...

  2. U-114: IBM Personal Communications WS File Processing Buffer Overflow Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability in WorkStation files (.ws) by IBM Personal Communications could allow a remote attacker to cause a denial of service (application crash) or potentially execute arbitrary code on vulnerable installations of IBM Personal Communications.

  3. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability. April 24, 2012 ...

  4. U.S. Department of Energy and IBM to Collaborate in Advancing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U.S. Department of Energy and IBM to Collaborate in Advancing Supercomputing Technology. November 15, 2006. Lawrence Livermore and Argonne National Lab Scientists to Work with IBM Designers. WASHINGTON, DC -- The U.S. Department of Energy (DOE) announced today that its Office of Science, the National Nuclear Security Administration (NNSA) and IBM will share the cost of a

  5. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities. August 29, 2013. PROBLEM: Multiple vulnerabilities have been reported in IBM TRIRIGA Application Platform, which can be exploited by malicious people to conduct cross-site scripting attacks. PLATFORM: IBM TRIRIGA Application Platform 2.x. ABSTRACT: The vulnerabilities are

  6. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System. December 1, 2011. PROBLEM: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System. PLATFORM: IBM Tivoli Netcool Reporter prior to 2.2.0.8. ABSTRACT: A vulnerability was reported in IBM Tivoli Netcool

  7. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect (OSTI)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  8. EZVIDEO, FORTRAN graphics routines for the IBM AT

    SciTech Connect (OSTI)

    Patterson, M.R.; Holdeman, J.T.; Ward, R.C.; Jackson, W.L.

    1989-10-01

    A set of IBM PC-based FORTRAN plotting routines called EZVIDEO is described in this report. These routines are written in FORTRAN and can be called from FORTRAN programs. EZVIDEO simulates a subset of the well-known DISSPLA graphics calls and makes plots directly on the IBM AT display screen. Screen dumps can also be made to an attached LaserJet or Epson printer to make hard copy without using terminal emulators. More than forty DISSPLA calls are simulated by the EZVIDEO routines. Typical screen plots require about 10 seconds (s), and good hard copy of the screen image on a laser printer requires less than 2 minutes (min). This higher-resolution hard copy is adequate for most purposes because of the enhanced resolution of the screen in the EGA and VGA modes. These EZVIDEO routines give the IBM AT user a stand-alone capability to make useful scientific or engineering plots directly on the AT, using data generated in FORTRAN programs. The routines will also work on the IBM PC or XT in CGA mode, but they require more time and yield less resolution. 7 refs., 4 figs.
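
    The approach of emulating another library's call interface can be sketched briefly. The Python fragment below is only an analogy (EZVIDEO itself is FORTRAN, and the method names here are hypothetical DISSPLA-style stand-ins, not actual EZVIDEO entry points); it shows a thin shim that accepts a plotting call sequence and renders it with a modern backend:

      # Illustrative shim: a DISSPLA-like call sequence rendered via matplotlib.
      # Method names are hypothetical stand-ins for the emulated call subset.
      import matplotlib.pyplot as plt

      class EzPlot:
          def __init__(self):
              self.fig, self.ax = plt.subplots()
          def xname(self, label):            # set the x-axis label
              self.ax.set_xlabel(label)
          def yname(self, label):            # set the y-axis label
              self.ax.set_ylabel(label)
          def curve(self, x, y):             # draw one curve
              self.ax.plot(x, y)
          def endpl(self, path="plot.png"):  # "screen dump" to hard copy
              self.fig.savefig(path)

      p = EzPlot()
      p.xname("time (s)")
      p.yname("amplitude")
      p.curve([0, 1, 2, 3], [0.0, 0.8, 0.4, 0.9])
      p.endpl()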

  9. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability. April 24, 2012. PROBLEM: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability. PLATFORM: Versions 7.1.1 through 7.1.2.5, 8.0, and 8.0.0.1. ABSTRACT: A vulnerability was reported in IBM Rational ClearQuest. A remote user can cause arbitrary code to be executed on the target

  10. U-186: IBM WebSphere Sensor Events Multiple Vulnerabilities | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-186: IBM WebSphere Sensor Events Multiple Vulnerabilities. June 8, 2012. PROBLEM: Multiple vulnerabilities have been reported in IBM WebSphere Sensor Events. PLATFORM: IBM WebSphere Sensor Events 7.x. ABSTRACT: Some vulnerabilities have unknown impacts and others can be exploited by malicious people to conduct cross-site scripting attacks. Reference Links: Secunia ID 49413. No CVE references. Vendor URL. IMPACT

  11. WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf |

    Energy Savers [EERE]

    WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf. More Documents & Publications: WA_04_053_IBM_CORP_Waiver_of_the_Government_U.S._and_Foreign.pdf; WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf; Advance Patent Waiver W(A)2002-023

  12. V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple security vulnerabilities exist in the Java Runtime Environments (JREs) that can affect the security of IBM Tivoli Application Dependency Discovery Manager

  13. T-594: IBM solidDB Password Hash Authentication Bypass Vulnerability

    Broader source: Energy.gov [DOE]

    This vulnerability could allow remote attackers to execute arbitrary code on vulnerable installations of IBM solidDB. Authentication is not required to exploit this vulnerability.

  14. T-561: IBM and Oracle Java Binary Floating-Point Number Conversion Denial of Service Vulnerability

    Broader source: Energy.gov [DOE]

    IBM and Oracle Java products contain a vulnerability that could allow an unauthenticated, remote attacker to cause a denial of service (DoS) condition on a targeted system.

  15. How Would IBM's Quiz-Show Computer, Watson, Do as a Competitor...

    Office of Science (SC) Website

    How Would IBM's Quiz-Show Computer, Watson, Do as a Competitor in the National Science Bowl? ...

  16. U-116: IBM Tivoli Provisioning Manager Express for Software Distribution Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple vulnerabilities have been reported in IBM Tivoli Provisioning Manager Express for Software Distribution, which can be exploited by malicious people to conduct SQL injection attacks and compromise a user's system

  17. U.S. Department of Energy and IBM to Collaborate in Advancing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    IBM to Collaborate in Advancing Supercomputing Technology U.S. Department of Energy and ... WASHINGTON, DC -- The U.S. Department of Energy (DOE) announced today that its Office of ...

  18. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wednesday, 21 January 2015. A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self-assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials

  19. U-007: IBM Rational AppScan Import/Load Function Flaws Let Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    Two vulnerabilities were reported in IBM Rational AppScan. A remote user can cause arbitrary code to be executed on the target user's system.

  20. T-615: IBM Rational System Architect ActiveBar ActiveX Control Lets Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    There is a high-risk security vulnerability in the ActiveBar ActiveX controls used by IBM Rational System Architect.

  1. Shape coexistence in the neutron-deficient Pt isotopes in the configuration-mixed IBM

    SciTech Connect (OSTI)

    Vargas, Carlos E.; Campuzano, Cuauhtemoc; Morales, Irving O.; Frank, Alejandro; Van Isacker, Piet

    2008-05-12

    The matrix-coherent state approach in the IBM with configuration mixing is used to describe the geometry of neutron-deficient Pt isotopes. Employing a parameter set for all isotopes determined previously, it is found that the lowest minimum goes from spherical to oblate and finally acquires a prolate shape when approaching the mid-shell Pt isotopes.

  2. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    SciTech Connect (OSTI)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott

    2013-01-01

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  3. Shape coexistence in the neutron-deficient Pt isotopes in a configuration mixing IBM

    SciTech Connect (OSTI)

    Morales, Irving O.; Vargas, Carlos E.; Frank, Alejandro

    2004-09-13

    The recently proposed matrix-coherent state approach for configuration mixing IBM is used to describe the evolving geometry of the neutron deficient Pt isotopes. It is found that the Potential Energy Surface (PES) of the Platinum isotopes evolves, when the number of neutrons decreases, from spherical to oblate and then to prolate shapes, in agreement with experimental measurements. Oblate-Prolate shape coexistence is observed in 194,192Pt isotopes.

  4. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self-assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials that self-assemble spontaneously form nanostructures down to the molecular scale, which would revolutionize

  5. Studies of phase transitions and quantum chaos relationships in extended Casten triangle of IBM-1

    SciTech Connect (OSTI)

    Proskurins, J.; Andrejevs, A.; Krasta, T.; Tambergs, J. [University of Latvia, Institute of Solid State Physics (Latvia)], E-mail: juris_tambergs@yahoo.com

    2006-07-15

    A precise solution of the classical energy functional E(N, η, χ; β) minimum problem with respect to the deformation parameter β is obtained for the simplified Casten version of the standard interacting boson model (IBM-1) Hamiltonian. The first-order phase transition lines as well as the critical points of X(5), -X(5), and E(5) symmetries are considered. The dynamical criteria of quantum chaos, namely the basis state fragmentation width and the wave function entropy, are studied for the (η, χ) parameter space of the extended Casten triangle, and the possible relationships between these criteria and phase transition lines are discussed.
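
    As background, the classical energy functional in this approach is the expectation value of the IBM-1 Hamiltonian in a boson coherent state; for the axially symmetric case the standard definition (textbook material, reproduced here rather than taken from the paper itself) reads:

      E(N,\eta,\chi;\beta) = \langle N;\beta\,|\,\hat{H}(\eta,\chi)\,|\,N;\beta\rangle,
      \qquad
      |N;\beta\rangle \propto \left(s^{\dagger} + \beta\,d_{0}^{\dagger}\right)^{N}|0\rangle

    The equilibrium deformation then follows from \partial E/\partial\beta = 0, which is the minimum problem referred to above.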

  6. Statement by Guido DeHoratiis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    will only address a subset of unconventional resources: shale gas, tight gas, shale oil, and tight oil, and a robust Federal research and development (R&D) plan is...

  7. Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect (OSTI)

    Kaper, H.; Ralley, D.; Restrepo, J.; Tipei, S.

    1995-12-31

    DIASS-M4C, a digital additive instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds and the degree of control the user can have justify the effort and the use of such a large computer.

  8. T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks

    Broader source: Energy.gov [DOE]

    A remote user can access the target user's cookies (including authentication cookies), if any, associated with the site running the IBM WebSphere software, access data recently submitted by the target user via web form to the site, or take actions on the site acting as the target user.

  9. Intelligent Bioreactor Management Information System (IBM-IS) for Mitigation of Greenhouse Gas Emissions

    SciTech Connect (OSTI)

    Paul Imhoff; Ramin Yazdani; Don Augenstein; Harold Bentley; Pei Chiu

    2010-04-30

    Methane is an important contributor to global warming with a total climate forcing estimated to be close to 20% that of carbon dioxide (CO2) over the past two decades. The largest anthropogenic source of methane in the US is 'conventional' landfills, which account for over 30% of anthropogenic emissions. While controlling greenhouse gas emissions must necessarily focus on large CO2 sources, attention to reducing CH4 emissions from landfills can result in significant reductions in greenhouse gas emissions at low cost. For example, the use of 'controlled' or bioreactor landfilling has been estimated to reduce annual US greenhouse emissions by about 15-30 million tons of CO2 carbon (equivalent) at costs between $3-13/ton carbon. In this project we developed or advanced new management approaches, landfill designs, and landfill operating procedures for bioreactor landfills. These advances are needed to address lingering concerns about bioreactor landfills (e.g., efficient collection of increased CH4 generation) in the waste management industry, concerns that hamper bioreactor implementation and the consequent reductions in CH4 emissions. Collectively, the advances described in this report should result in better control of bioreactor landfills and reductions in CH4 emissions. Several advances are important components of an Intelligent Bioreactor Management Information System (IBM-IS).

  10. ISTUM PC: industrial sector technology use model for the IBM-PC

    SciTech Connect (OSTI)

    Roop, J.M.; Kaplan, D.T.

    1984-09-01

    A project to improve and enhance the Industrial Sector Technology Use Model (ISTUM) was originated in the summer of 1983. The project had six identifiable objectives: update the data base; improve run-time efficiency; revise the reference base case; conduct case studies; provide technical and promotional seminars; and organize a service bureau. This interim report describes which of these objectives have been met and which tasks remain to be completed. The most dramatic achievement has been in the area of run-time efficiency. From a model that required a large proportion of the total resources of a mainframe computer and a great deal of effort to operate, the current version of the model (ISTUM-PC) runs on an IBM Personal Computer. The reorganization required for the model to run on a PC has additional advantages: the modular programs are somewhat easier to understand and the data base is more accessible and easier to use. A simple description of the logic of the model is given in this report. To generate the necessary funds for completion of the model, a multiclient project is proposed. This project will extend the industry coverage to all the industrial sectors, including the construction of process flow models for chemicals and petroleum refining. The project will also calibrate the model to historical data and construct a base case and alternative scenarios. The model will be delivered to clients and training provided. 2 references, 4 figures, 3 tables.

  11. T-559: Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote execution

    Broader source: Energy.gov [DOE]

    Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote attackers to execute arbitrary code via crafted arguments in the USELASTCOMMITTED session environment option in a SQL SET ENVIRONMENT statement.

  12. Study of Even-Even/Odd-Even/Odd-Odd Nuclei in Zn-Ga-Ge Region in the Proton-Neutron IBM/IBFM/IBFFM

    SciTech Connect (OSTI)

    Yoshida, N.; Brant, S.; Zuffi, L.

    2009-08-26

    We study the even-even, odd-even and odd-odd nuclei in the region including Zn-Ga-Ge in the proton-neutron IBM and the models derived from it: IBM2, IBFM2, IBFFM2. We describe 67Ga, 65Zn, and 68Ga by coupling odd particles to a boson core, 66Zn. We also calculate the β+ decay rates among 68Ge, 68Ga and 68Zn.

  13. IBM Blue Gene Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    How to use Open|SpeedShop to Analyze the Performance of Parallel Codes. Donald Frederick, LLNL. LLNL-PRES-508651. Performance analysis is becoming more important: complex architectures, complex applications, and mapping applications onto architectures. It is often hard to know where to start: Which experiments to run first? How to plan follow-on experiments? What kind of problems can be explored? How to interpret the data?

  14. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    SciTech Connect (OSTI)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

    2009-01-10

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
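
    The four-column SIMDization strategy can be visualized with a short vectorized sketch. This Python/NumPy fragment is only an analogy for the Cell's 4-wide single-precision SIMD (the actual port was C with SPE intrinsics, and the per-layer arithmetic below is a made-up stand-in, not the GEOS-5 solar radiation code):

      # Analogy for 4-wide SIMD over independent atmospheric columns: stacking
      # four columns lets one array operation touch all of them at once, much
      # as one SPE vector instruction operates on four single-precision floats.
      import numpy as np

      nlayers = 72
      cols = np.random.rand(4, nlayers).astype(np.float32)  # 4 independent columns

      def fake_column_physics(c):
          # Stand-in per-layer computation (illustrative only).
          return np.cumsum(0.5 * c + 0.1 * c * c, axis=-1)

      out = fake_column_physics(cols)  # one call processes all four columns
      print(out.shape)                 # (4, 72)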

  15. Niek Lopes Cardozo Guido Lange Gert-Jan Kramer (Shell Global Solutions).

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Nicola E. Ohaebgu - Lead Small Business Specialist. Nicola Ohaebgu is a Lead Small Business Specialist with the Office of Small and Disadvantaged Business Utilization. Before joining the Department of Energy, Nicola worked for the U.S. Army Medical Research Material Command - Office of Small Business Programs as a Deputy Assistant Director and Small Business Specialist for over three years. Prior to this, Nicola spent six years functioning as a

  16. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and temperature-dependent x-ray absorption spectroscopy experiments, in conjunction with x-ray diffraction and electrical transport measurements. The researchers were able to...

  17. U-179: IBM Java 7 Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Vulnerabilities can be exploited by malicious users to disclose certain information and by malicious people to disclose potentially sensitive information, hijack a user's session, conduct DNS cache poisoning attacks, manipulate certain data, cause a DoS (Denial of Service), and compromise a vulnerable system.

  18. U-139: IBM Tivoli Directory Server Input Validation Flaw

    Broader source: Energy.gov [DOE]

    The Web Admin Tool does not properly filter HTML code from user-supplied input before displaying the input.

  19. V-118: IBM Lotus Domino Multiple Vulnerabilities | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    to version 9.0 or update to version 8.5.3 Fix Pack 4 when available. Related Articles: T-534: Vulnerability in the PDF distiller of the BlackBerry Attachment Service...

  20. T-694: IBM Tivoli Federated Identity Manager Products Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    This Security Alert addresses a serious security issue, CVE-2010-4476 (Java Runtime Environment hangs when converting "2.2250738585072012e-308" to a binary floating-point number). This vulnerability might cause the Java Runtime Environment to hang, loop indefinitely, and/or crash, resulting in a denial of service exposure. The same hang might occur if the number is written without scientific notation (324 decimal places). In addition to the Application Server being exposed to this attack, any Java program using the Double.parseDouble method is also at risk of this exposure, including any customer-written or third-party application.

  1. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Asset and Service Mgmt Products - Potential security exposure when using Java™-based applications due to vulnerabilities in Java Software Developer Kits.

  2. Measurement of the Neutron Radius of 208Pb Through Parity-Violation...

    Office of Scientific and Technical Information (OSTI)

    ; Cusanno, Francesco ; Dalton, Mark ; De Leo, Raffaele ; De Jager, Cornelis ; Deconinck, ... Vincent ; Sutera, Concetta ; Tobias, William ; Troth, Wolfgang ; Urciuoli, Guido ; ...

  3. T-559: Stack-based buffer overflow in oninit in IBM Informix...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    exploit this vulnerability. The specific flaw exists within the oninit process bound to TCP port 9088 when processing the arguments to the USELASTCOMMITTED option in a SQL query....

  4. Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired

    Energy Savers [EERE]

    Large Power Transformers and the U.S. Electric Grid Report Update (April 2014). The Office of Electricity Delivery and Energy Reliability has released an update to its 2012 Large Power Transformers and the U.S. Electric Grid report. The new report includes updated information about global electrical steel supply conditions and discusses the increased domestic production of large power

  5. U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service

    Broader source: Energy.gov [DOE]

    A remote user can send a series of specially crafted TCP packets to trigger a kernel panic on the target system.

  6. T-722: IBM WebSphere Commerce Edition Input Validation Holes...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    recently submitted by the target user via web form to the site, or take actions on the ...

  7. Design procedure for pollutant loadings and impacts for highway stormwater runoff (IBM version) (for microcomputers). Software

    SciTech Connect (OSTI)

    Not Available

    1990-01-01

    This interactive computer program provides a user-friendly personal computer procedure for the calculations and guidance used to estimate pollutant loadings and impacts from highway stormwater runoff, as presented in Publication FHWA-RD-88-006, Pollutant Loadings and Impacts from Highway Stormwater Runoff, Volume I: Design Procedure. The program evaluates the water quality impact of highway stormwater runoff on a lake or a stream at a specific highway site, taking into account the necessary rainfall data and the geographic situation of the site. The evaluation considers whether the resulting water quality conditions can cause a problem, as indicated by violations of water quality criteria or objectives.

  8. U.S. Department of Energy and IBM to Collaborate in Advancing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    TECHNOLOGIES OFFICE U.S. Department of Energy's Wind Program Funding in the United States: Workforce Development Projects Report Fiscal Years 2008 - 2014 WIND PROGRAM 1 Introduction Wind and Water Power Technologies Office The Wind and Water Power Technologies Office (WWPTO), within the U.S. Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy (EERE), supports the development, deployment, and commercialization of wind and water power technologies. WWPTO works with a

  9. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    LaserJet Printers Unspecified Flaw Lets Remote Users Update Firmware with Arbitrary Code U-053: Linux kexec Bugs Let Local and Remote Users Obtain Potentially Sensitive Information

  10. Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired...

    Energy Savers [EERE]

    ... TrueNorth was originally developed under the auspices of the Defense Advanced Research Projects Agency's (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics ...

  11. Survivability enhancement study for C³I/BM (communications...

    Office of Scientific and Technical Information (OSTI)

    RELIABILITY; ELECTROMAGNETIC PULSES; COMMUNICATIONS; FEASIBILITY STUDIES; FIBER OPTICS; HARDENING; MILITARY EQUIPMENT; POWER SUPPLIES; PROGRESS REPORT; SURVIVAL TIME;...

  12. A Core Hole in the Southwestern Moat of the Long Valley Caldera...

    Open Energy Info (EERE)

    in water level, temperatures, and fluid chemistry. Authors Harold A. Wollenberg, Michael L. Sorey, Christopher D. Farrar, Art F. White, S. Flexser and L.C. Bartel Published...

  13. Local Imaging of High Mobility Two-Dimensional Electron Systems...

    Office of Scientific and Technical Information (OSTI)

    Authors: Pelliccione, M. ; Stanford U., Appl. Phys. Dept. SLAC UC, Santa Barbara ; Bartel, J. ; SLAC Stanford U., Phys. Dept. ; Sciambi, A. ; Stanford U., Appl. Phys. Dept. ...

  14. A Large Hadron Electron Collider at CERN (Journal Article) |...

    Office of Scientific and Technical Information (OSTI)

    M. ; Brookhaven ; Barber, D. ; Daresbury DESY Liverpool U. ; Bartels, J. ; Hamburg, Tech. U. ; Behnke, O. ; DESY ; Behr, J. ; DESY ; Belyaev, A.S. ; Rutherford...

  15. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Filter by Author Robbins, Joshua (21) Dingreville, Remi Philippe Michel (7) Voth, Thomas Eugene (7) Voth, Thomas E. (5) Robbins, Joshua H. (4) Bartel, Timothy James (3) Clark, ...

  16. Before the Subcommittees on Energy and Environment - House Committee...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology Testimony of Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy Before the Subcommittees on Energy and Environment - House...

  17. Tuning of the nucleation field in nanowires with perpendicular...

    Office of Scientific and Technical Information (OSTI)

    Authors: Kimling, Judith ; Gerhardt, Theo ; Kobs, André ; Vogel, Andreas ; Wintz, Sebastian ; Im, Mi-Young ; Fischer, Peter ; Oepen, Hans Peter ; Merkt, Ulrich ; Meier, Guido ...

  18. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... Cataldo, Giacinto ; De Leo, Raffaele ; Giuliani, Fausto ; Gricia, Massimo ; Lagamba, ... Salvatore ; Garibaldi, Franco ; Iodice, Mauro ; Urciuoli, Guido ; Nilles, Michael A pair ...

  19. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL. 10 Questions for a Materials Scientist: Brian Larsen Meet Brian Larsen, who is...

  20. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³-voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
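
    For context, simulators of this type integrate the Bloch equation for the magnetization of each voxel; its standard form (a textbook relation, not specific to this paper) is:

      \frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M}\times\mathbf{B}
      - \frac{M_{x}\hat{\mathbf{x}} + M_{y}\hat{\mathbf{y}}}{T_{2}}
      - \frac{(M_{z} - M_{0})\,\hat{\mathbf{z}}}{T_{1}}

    where T_1 and T_2 are the longitudinal and transverse relaxation times. The per-voxel independence of this integration is what makes the workload redistribution described above effective.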

  1. Nuclear matrix elements for 0νβ⁻β⁻ decays: Comparative analysis of the QRPA, shell model and IBM predictions

    SciTech Connect (OSTI)

    Civitarese, Osvaldo; Suhonen, Jouni

    2013-12-30

    In this work we report on general properties of the nuclear matrix elements (NMEs) involved in the neutrinoless double β⁻ decays (0νβ⁻β⁻ decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, such as the interacting shell model (ISM) and the interacting boson model (IBA-2).
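
    The role of the NMEs is clearest in the standard rate factorization for light-neutrino exchange (a textbook relation, added here as background):

      \left[T^{0\nu}_{1/2}\right]^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}
      \left(\frac{\langle m_{\beta\beta}\rangle}{m_{e}}\right)^{2}

    where G^{0\nu} is the phase-space factor, M^{0\nu} is the nuclear matrix element whose QRPA, ISM, and IBA-2 values are being compared, and \langle m_{\beta\beta}\rangle is the effective Majorana neutrino mass.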

  2. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect (OSTI)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
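
    The variance propagation that SENSIT performs follows the "sandwich rule" of generalized perturbation theory; in standard notation (a generic summary, with SENSIT's exact conventions assumed rather than quoted):

      \left(\frac{\Delta R}{R}\right)^{2} = \sum_{g,g'} S_{g}\,C_{gg'}\,S_{g'},
      \qquad
      S_{g} = \frac{\sigma_{g}}{R}\,\frac{\partial R}{\partial \sigma_{g}}

    where S_g is the relative sensitivity profile of the integral response R to the group-g cross section \sigma_g and C is the corresponding relative covariance matrix.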

  3. DOE's Shale Gas and Hydraulic Fracturing Research

    Broader source: Energy.gov [DOE]

    Statement of Guido DeHoratiis Acting Deputy Assistant Secretary for Oil and Natural Gas before the House Committee on Science, Space, and Technology Subcommittees on Energy and Environment

  4. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Abdo, Aous A. (2) Ackermann, M. (2) Ajello, Marco (2) Atwood, William B. (2) Baldini, L. (2) Ballet, J. (2) Barbiellini, Guido (2) Bastieri, Denis (2) Baughman, B.M. (2) Bechtol, ...

  5. Before the Subcommittees on Energy and Environment- House Committee on Science, Space, and Technology

    Broader source: Energy.gov [DOE]

    Subject: Interagency Working Group to Support Safe and Responsible Development of Unconventional Domestic Natural Gas Resources By: Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy

  6. 10 Questions for a Materials Scientist: Brian Larsen | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    10 Questions for a Materials Scientist: Brian Larsen. January 24, 2013. Brian Larsen is developing the next generation of fuel cell catalysts thanks to the Energy Efficiency and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL.

  7. Nicholas Donofrio

    Broader source: Energy.gov [DOE]

    Nicholas M. Donofrio is a 44-year IBM veteran who led IBM's technology and innovation strategies from 1997 until his retirement in October 2008. He also was vice chairman of the IBM...

  8. Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%

    Broader source: Energy.gov [DOE]

    Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using...

  9. The Cell Processor

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    © 2005 IBM Corporation. The Cell Processor: Architecture & Issues. Agenda: Cell Processor Overview; Programming the Cell Processor; Concluding Remarks. ...

  10. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  11. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  12. DOE Providing Additional Supercomputing Resources to Study Hurricane...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    But by tapping NERSC's supercomputers, which include a 6,080-processor IBM supercomputer, an 888-processor IBM cluster computer, and a 720-processor Linux Networx cluster, the ...

  13. Programming Challenges Abstracts | U.S. DOE Office of Science...

    Office of Science (SC) Website

    ... His responsibilities at IBM included leading IBM's research efforts in programming model, tools, and productivity in the PERCS project during 2002- 2007 as part of the DARPA High ...

  14. SCTutorial_PGAS_2011-2.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... (gcc), IBM, SGI, MTU, and others * Titanium (Java based) - Compiler from Berkeley DARPA High Productivity Computer Systems (HPCS) language project: * X10 (based on Java, IBM) ...

  15. SCTutorial_PGAS_2012-8.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... (gcc), IBM, SGI, MTU, and others * Titanium (Java based) - Compiler from Berkeley DARPA High Productivity Computer Systems (HPCS) language project: * X10 (based on Java, IBM) ...

  16. Local Imaging of High Mobility Two-Dimensional Electron Systems with

    Office of Scientific and Technical Information (OSTI)

    Virtual Scanning Tunneling Microscopy (Journal Article) | SciTech Connect Local Imaging of High Mobility Two-Dimensional Electron Systems with Virtual Scanning Tunneling Microscopy Citation Details In-Document Search Title: Local Imaging of High Mobility Two-Dimensional Electron Systems with Virtual Scanning Tunneling Microscopy Authors: Pelliccione, M. ; /Stanford U., Appl. Phys. Dept. /SLAC /UC, Santa Barbara ; Bartel, J. ; /SLAC /Stanford U., Phys. Dept. ; Sciambi, A. ; /Stanford U.,

  17. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by...

  18. Driving Operational Changes through an Energy Monitoring System

    SciTech Connect (OSTI)

    2012-08-01

    This institutional change case study details IBM's corporate efficiency program, which focused on basic operational improvements across its diverse real estate operations.

  19. Motor Current Data Collection System

    Energy Science and Technology Software Center (OSTI)

    1992-12-01

    The Motor Current Data Collection System (MCDCS) uses IBM compatible PCs to collect, process, and store Motor Current Signature information.

  20. Disk Quota | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies

  1. Magnetic soft x-ray microscopy of the domain wall depinning process in

    Office of Scientific and Technical Information (OSTI)

    permalloy magnetic nanowires (Journal Article) | SciTech Connect Magnetic soft x-ray microscopy of the domain wall depinning process in permalloy magnetic nanowires Citation Details In-Document Search Title: Magnetic soft x-ray microscopy of the domain wall depinning process in permalloy magnetic nanowires Authors: Im, Mi-Young ; Bocklage, Lars ; Meier, Guido ; Fischer, Peter Publication Date: 2011-10-27 OSTI Identifier: 1172969 Report Number(s): LBNL-5866E DOE Contract Number:

  2. Stochastic formation of magnetic vortex structures in asymmetric disks

    Office of Scientific and Technical Information (OSTI)

    triggered by chaotic dynamics (Journal Article) | SciTech Connect Stochastic formation of magnetic vortex structures in asymmetric disks triggered by chaotic dynamics Citation Details In-Document Search Title: Stochastic formation of magnetic vortex structures in asymmetric disks triggered by chaotic dynamics Authors: Im, Mi-Young ; Lee, Ki-Suk ; Vogel, Andreas ; Hong, Jung-Il ; Meier, Guido ; Fischer, Peter Publication Date: 2014-12-17 OSTI Identifier: 1167390 Report Number(s): LBNL-6890E

  3. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Author search results for "Urciuoli, Guido" (SciTech Connect).

  4. The ExaChallenge Symposium

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    RC25406 (IRE1308-033), August 26, 2013. IBM Research Report: The ExaChallenge Symposium. Rolf Riesen, IBM Research - Ireland (Smarter Cities Technology Centre, Mulhuddart, Dublin 15, Ireland); Sudip Dosanjh, LBNL/NERSC; Larry Kaplan, Cray Inc. October 16-18, 2012. Abstract: The ExaChallenge

  5. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentations Presentations Sort by: Default | Name | Date (low-high) | Date (high-low) | Source | Category IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by default. Specific optimization command line options must be supplied to the compilers in order for them to produce optimized code. In this talk, several of the more useful optimization options for the IBM Fortran, C,

  6. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by default. Specific optimization command line options must be supplied to the compilers in order for them to produce optimized code. In this talk, several of the more useful optimization options for the IBM Fortran, C, and C++ compilers are described and recommendations will be given on which of them are most useful.

  7. DOE Announces Secretary of Energy Advisory Board | Department...

    Energy Savers [EERE]

    IBM Alexis Herman Former Secretary of Labor Chad Holliday, Jr. Former CEO of Dupont Michael McQuade Senior VP, United Technologies Corporation William Perry Former ...

  8. Secretary Chu Announces 150 Students to Receive Graduate Fellowships...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Alexis Herman Former Secretary of Labor Chad Holliday, Jr. Former CEO of Dupont Michael McQuade Senior VP, United Technologies Corporation William Perry Former ...

  9. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    California, Santa Barbara Catherine Murphy, University of Illinois at Urbana-Champaign Frances Ross, IBM Ned Seeman, New York University Donald Tennant, Cornell Nanoscale Science...

  10. Area schools get new computers through Los Alamos National Laboratory...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Area schools get new computers Area schools get new computers through Los Alamos National Laboratory, IBM partnership Northern New Mexico schools are recipients of fully loaded...

  11. Bradbury Museum's supercomputing exhibit gets updated

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    exhibit gets updated The updated exhibit includes interactive displays, artifacts from early computers, vacuum tubes from the MANIAC computer, and unique IBM Cell blades from Roadrunner....

  12. Institutional Change Process Step 4: Implement an Action Plan...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... Use these examples to think through how to implement institutional change. IBM: Driving Operation Changes through an Energy Monitoring System U.S. Fish and Wildlife Service: ...

  13. Armonk, New York: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    place in Westchester County, New York. Registered Energy Companies in Armonk, New York: International Business Machines Corp (IBM); Windfarm Finance LLC. References: US Census...

  14. Richard Luis Martin | Center for Gas SeparationsRelevant to Clean...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Richard Luis Martin. Formerly: Postdoctoral Research Fellow, Lawrence Berkeley National Laboratory. Presently: Staff scientist, IBM. PhD...

  15. SunShot Rooftop Challenge Awardees | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    enable multiple financing options for community solar programs. City University of New York City University of New York, NYC Department of Buildings, Procemx, CUNY Ventures, IBM,...

  16. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    is a somewhat futuristic material that could yield faster and much more energy-efficient electronic devices. IBM researchers from the company's forward-thinking Spintronic Science...

  17. U.S. Department of Energy Interim E-QIP Procedures | Department...

    Broader source: Energy.gov (indexed) [DOE]

    Energy Security Symposium OE Releases Second Issue of Energy Emergency Preparedness Quarterly (April 2012) V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets...

  18. Advance Patent Waiver W(A)2005-048

    Broader source: Energy.gov [DOE]

    This is a request by IBM BLUEGENE/P DESIGN, PHASE III for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  19. U. S. DEPARTMENT OF ENERGY OFFICE OF HEARINGS AND APPEALS DECISION AND ORDER

    Energy Savers [EERE]

    U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability. April 24, 2012 - 7:00am. PROBLEM: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability. PLATFORM: Versions 7.1.1 through 7.1.2.5, 8.0, and 8.0.0.1. ABSTRACT: A vulnerability was reported in IBM Rational ClearQuest. A remote user can cause arbitrary code to be executed on the target

  20. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  1. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  2. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  3. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  4. EERE Success Story-Solar Forecasting Gets a Boost from Watson, Accuracy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    EERE Success Story - Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%. October 27, 2015 - 11:48am. IBM YouTube Video | Courtesy of IBM. Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using Watson-like technology to improve solar

  5. Pete Beckman on Mira and Exascale

    ScienceCinema (OSTI)

    Pete Beckman

    2013-06-06

    Argonne's Pete Beckman, director of the Exascale Technology and Computing Institute (ETCi), talks about the IBM Blue Gene/Q supercomputer and the future of computing and Exascale technology.

  6. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  7. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  8. V-215: NetworkMiner Directory Traversal and Insecure Library...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Addthis Related Articles U-198: IBM Lotus Expeditor Multiple Vulnerabilities U-146: Adobe ReaderAcrobat Multiple Vulnerabilities T-542: SAP Crystal Reports Server Multiple...

  9. Coreprocessor | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    below for using the Coreprocessor tool. References The Coreprocessor tool (IBM System Blue Gene Solution: Blue GeneQ System Administration, Chapter 22) Location The...

  10. Blue Gene/Q Network Performance Counters Monitoring Library

    Energy Science and Technology Software Center (OSTI)

    2015-03-12

    BGQNCL is a library to monitor and record network performance counters on the 5D torus interconnection network of IBM's Blue Gene/Q platform.

  11. Overview

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Vermont Regional Test Center (RTC) is situated in the town of Williston, Vermont (outside of Burlington), adjacent to IBM's semiconductor facility. Located on flat, unshaded land, ...

  12. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM researchers from the company's forward-thinking Spintronic Science and Applications Center recently used the ALS to gain greater insight into vanadium dioxide's unusual phase ...

  13. Advance Patent Waiver W(A)2005-014

    Broader source: Energy.gov [DOE]

    This is a request by IBM for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  14. Department of Energy Awards $425 Million for Next Generation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Both CORAL awards leverage the IBM Power Architecture, NVIDIA's Volta GPU and Mellanox's ... New Brain-Inspired Supercomputer: Chip-architecture breakthrough accelerates path to ...

  15. Queueing & Running Jobs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Running on BGQ Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies

  16. Microsoft PowerPoint - NERSC-NUG-yukiko-08

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Research and Evaluation Prototypes will continue to support the DARPA HPCS partnership's development of the Cray architecture and, in partnership with NNSA and IBM, support the ...

  17. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC...

    Office of Scientific and Technical Information (OSTI)

    Intel Institute for Defense Analyses University of California, San Diego IBM DARPA NVIDIA University of Tennessee Oak Ridge National Laboratory Lawrence Livermore ...

  18. Multi Platform Graphics Subroutine Library

    Energy Science and Technology Software Center (OSTI)

    1992-02-21

    DIGLIB is a collection of general graphics subroutines. It was designed to be small, reasonably fast, device-independent, and compatible with DEC-supplied operating systems for VAXes, PDP-11s, and LSI-11s, and the DOS operating system for IBM PCs and IBM-compatible machines. The software is readily usable by casual programmers for two-dimensional plotting.
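
    The last point - supporting both pixel coordinates and user-specified real coordinates - comes down to a linear world-to-device transform. A minimal sketch in Python (illustrative only, not DIGLIB's FORTRAN; the function name and the 320x200 screen size are made up for the example):

        # Map user ("world") coordinates onto integer pixel coordinates.
        def make_world_to_pixel(xmin, xmax, ymin, ymax, width, height):
            """Return a function mapping world (x, y) to pixel (px, py)."""
            def to_pixel(x, y):
                px = round((x - xmin) / (xmax - xmin) * (width - 1))
                # Flip y: the device origin is conventionally the top-left corner.
                py = round((ymax - y) / (ymax - ymin) * (height - 1))
                return px, py
            return to_pixel

        to_pixel = make_world_to_pixel(0.0, 10.0, -1.0, 1.0, 320, 200)
        print(to_pixel(5.0, 0.0))   # -> (160, 100), roughly mid-screen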

  19. Geometry of coexistence in the interacting boson model

    SciTech Connect (OSTI)

    Van Isacker, P.; Frank, A.; Vargas, C.E.

    2004-09-13

    The Interacting Boson Model (IBM) with configuration mixing is applied to describe the phenomenon of coexistence in nuclei. The analysis suggests that the IBM with configuration mixing, used in conjunction with a (matrix) coherent-state method, may be a reliable tool for the study of geometric aspects of shape coexistence in nuclei.

  20. Magnetic soft x-ray microscopy of the

    Office of Scientific and Technical Information (OSTI)

    soft x-ray microscopy of the domain wall depinning process in permalloy magnetic nanowires. Mi-Young Im, Lars Bocklage, Guido Meier, and Peter Fischer. Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Institut für Angewandte Physik und Zentrum für Mikrostrukturforschung, Universität Hamburg, Jungiusstrasse 11, 20355 Hamburg, Germany. Abstract: Full-field magnetic transmission x-ray microscopy at high spatial

  1. Stochastic formation of magnetic vortex

    Office of Scientific and Technical Information (OSTI)

    Stochastic formation of magnetic vortex structures in asymmetric disks triggered by chaotic dynamics Mi-Young Im,1,4* Ki-Suk Lee,2* Andreas Vogel,3 Jung-Il Hong,4 Guido Meier,3,5 and Peter Fischer1,6 1Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley CA 94720, USA 2School of Mechanical and Advanced Materials Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea 3Institut für Angewandte Physik und Zentrum für Mikrostrukturforschung, Universität

  2. Helicity evolution at small-x

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    2016-01-13

    We construct small-x evolution equations which can be used to calculate quark and anti-quark helicity TMDs and PDFs, along with the g{sub 1} structure function. These evolution equations resum powers of α{sub s} ln{sup 2}(1/x) in the polarization-dependent evolution along with the powers of α{sub s} ln(1/x) in the unpolarized evolution, which includes saturation effects. The equations are written in an operator form in terms of polarization-dependent Wilson line-like operators. While the equations do not close in general, they become closed and self-contained systems of non-linear equations in the large-N{sub c} and large-N{sub c} & N{sub f} limits. As a cross-check, in the ladder approximation, our equations map onto the same ladder limit of the infrared evolution equations for the g{sub 1} structure function derived previously by Bartels, Ermolaev and Ryskin.
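
    The two resummation parameters mentioned above can be contrasted schematically: the polarized (helicity) evolution must be resummed when

        \alpha_s \ln^2(1/x) \sim 1 ,

    whereas the unpolarized small-x evolution resums the single-logarithmic parameter

        \alpha_s \ln(1/x) \sim 1 ,

    so the double-logarithmic polarized kernel becomes important at parametrically larger x than the unpolarized one.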

  3. PII: S0368-2048(98)00286-2

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Liquid crystal alignment by rubbed polymer surfaces: a microscopic bond orientation model. J. Stöhr*, M.G. Samant, IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120 USA. Dedication by J. Stöhr - This paper is dedicated to Dick Brundle who for many years was my colleague at the IBM Almaden Research Center. Dick was responsible for my hiring by IBM, and over the years we interacted with each other in many roles - as each other's boss or simply as colleagues.

  4. Bassi_intro_NUG06.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bassi: IBM POWER 5 p575. Richard Gerber, NERSC User Services Group, RAGerber@lbl.gov. NUG, June 13, 2006, Princeton Plasma Physics Lab. About Bassi: Bassi is an IBM p575 POWER 5 cluster. * It is a distributed-memory computer, with 111 single-core 8-way SMP compute nodes. * 888 processors are available to run scientific computing applications. * Each node has 32 GB of memory. * The nodes are connected by IBM's proprietary HPS network. * It is named in honor of

  5. A2 Processor User's Manual for Blue Gene/Q

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A2 Processor User's Manual for Blue Gene/Q Note: This document and the information it contains are provided on an as-is basis. There is no plan for providing for future updates and corrections to this document. October 23, 2012 Version 1.3 Title Page ® Copyright and Disclaimer © Copyright International Business Machines Corporation 2010, 2012 Printed in the United States of America October 2012 IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business

  6. QPX Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QPX Architecture Quad Processing eXtension to the Power ISA TM May 9, 2012 Thomas Fox foxy@us.ibm.com QPX Architecture 2 Chapter 1. Quad-Vector Floating-Point Facility Overview This document defines the Quad-Processing eXtension (QPX) to IBM's Power Instruction Set Architecture. Refer to IBM's Power ISA TM AS architecture document for descriptions of the base Power instruction set, the storage model, and related facilities available to the application programmer. The computational model of the

  7. Timeline of Events: 2001 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Timeline of Events: 2001. August 15, 2001: IBM's ASCI White - Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White. Read more. June 28, 2001: President Bush announces $85.7 million in Federal grants - President Bush speaks to employees at DOE's Forrestal building in Washington, D.C. announcing $85.7 million in Federal grants. Read

  8. 2011 NERSC User Survey (Read Only)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sun Solaris IBM AIX HP HPUX SGI IRIX Other PC Systems Windows 7 Windows Vista Windows XP Windows 2000 Other Windows Mac Systems MacOS X MacOS 9 or earlier Other Mac Other...

  9. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... Supercomputers, such as IBM Blue Gene/L and Cray XT3, will soon make tens to hundreds of ... processing technologies in systems with tens of thousands of processing cores, it is ...

  10. through Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Area schools get new computers through Los Alamos National Laboratory, IBM partnership May 8, 2009 LOS ALAMOS, New Mexico, May 8, 2009-Thanks to a partnership between Los Alamos...

  11. Progress

    Office of Scientific and Technical Information (OSTI)

    H5Part, which is oriented to the needs of the particle physics and cosmology communities, ... and provide data showing performance on modern supercomputer architectures like the IBM ...

  12. STRIPESPDSlidesApril_2010.pdf | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    STRIPESPDSlidesApril2010.pdf. PDF icon STRIPESPDSlidesApril2010.pdf. More Documents & Publications: The document title is Arial, 32-point bold. IBM...

  13. Infrastructure Institutional Change Principle | Department of...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    IBM used the infrastructure behavior change principle to adjust its operational and ... Top-of-the-line meters offered vital data on how and when facilities use energy. This ...

  14. Cetus and Vesta | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    jobs in order to debug problems that occurred on Mira. Cetus System Configuration - Architecture: IBM BG/Q; Processor: 16 1600 MHz PowerPC A2 cores; Cabinets: 4; Nodes: 4,096; Cores...

  15. Solar Forecasting Gets a Boost from Watson, Accuracy Improved...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30% Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30% October 27, 2015 - 11:48am Addthis IBM ...

  16. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    code. In this talk, several of the more useful optimization options for the IBM Fortran, C, and C++ compilers are described and recommendations will be given on which of...

  17. 2000 User Survey Results

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    "NERSC has been the most stable supercomputer center in the country particularly with the migration from the T3E to the IBM SP". "Makes supercomputing easy." Below are the survey...

  18. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect (OSTI)

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  19. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    from the latest Clean Energy Jobs Roundup. August 7, 2012 Principal Deputy Director Eric Toone, former ARPA-E Director Arun Majumdar, the Honorable Bart Gordon and IBM Research...

  20. ASC_machines_cielo_2

    National Nuclear Security Administration (NNSA)

    Blue Gene/L * First low-power "green" supercomputer * 360 teraFLOPS * #1 on the TOP500, 11/04-6/08 * IBM Roadrunner * First to break the petaFLOPS barrier * Power-efficient hybrid computer ...

  1. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    was reported in IBM Tivoli Federated Identity Manager. January 18, 2013 V-072: Red Hat update for java-1.7.0-openjdk Red Hat has issued an update for java-1.7.0-openjdk....

  2. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8 ...

  3. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8

  4. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    on all HPC systems. In particular, on the IBM BG/Q system, HACC has reached very high levels of performance - almost 14 petaflops (the highest ever recorded by a science code)...

  5. Microsoft PowerPoint - Salishan 2005Adolfyweb

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Work funded by ASC, Office of Science, DARPA CCS-3 P A L CESC, April 2005, Washington DC ... Examine possible future systems - e.g. IBM PERCS (DARPA HPCS), BlueGeneP, ... ? Recent ...

  6. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Ecosystem Location: 67-3111 Chemla Room Abstract: Over the past 6 years as part of the DARPA SyNAPSE program, IBM's Brain Inspired Computing group has created an end-to-end ...

  7. Computing Resources | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. ... Mira, our 10-petaflops IBM Blue GeneQ supercomputer, is the engine that drives scientific ...

  8. U-048: HP LaserJet Printers Unspecified Flaw Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    T-699: EMC AutoStart Buffer Overflows Let Remote Users Execute Arbitrary Code U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System...

  9. Item Management Control System

    Energy Science and Technology Software Center (OSTI)

    1993-08-06

    The Item Management Control System (IMCS) has been developed at Idaho National Engineering Laboratory to assist in organizing collections of documents using an IBM-PC or similar DOS system platform.

  10. DuncanRickoverandtheNuclearNavyPicturesOnly.pdf

    Energy Savers [EERE]

    Department of Energy. Driving Operational Changes Through an Energy Monitoring System. Fact sheet describes a case study of IBM's corporate energy efficiency monitoring program that focuses on basic improvements in its real estate operations. PDF icon ic_ibm.pdf. More Documents & Publications: Driving Operational Changes Through an Energy Monitoring System; Data, Feedback, and Awareness Lead to Big Energy Savings; Connecting

  11. INCITE Program Doles Out Hours on Supercomputers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    INCITE Program Doles Out Hours on Supercomputers. November 5, 2012 - 1:30pm. Mira, the 10-petaflop IBM Blue Gene/Q system at Argonne National Laboratory, is capable of carrying out 10 quadrillion calculations per second. Each year researchers apply to the INCITE program to get to use this machine's incredible computing power. | Photo courtesy of Argonne National Lab.

  12. Microsoft PowerPoint - 2009 04 Salishan prog models CLEAN-Stunkel [Compatibility Mode]

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Salishan conference, April 2009. Impacts of Energy Efficiency on Supercomputer Programming Models. Craig Stunkel, IBM Research. What is a programming model? A programming model is a story - a common conceptual framework - used by application developers, algorithm designers, ... It may be realized through one or more of: * Libraries * Language/compiler extensions - pragmas, ...

  13. Driving Operational Changes Through an Energy Monitoring System |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy. Driving Operational Changes Through an Energy Monitoring System. Fact sheet describes a case study of IBM's corporate energy efficiency monitoring program that focuses on basic improvements in its real estate operations. PDF icon ic_ibm.pdf. More Documents & Publications: Driving Operational Changes Through an Energy Monitoring System; Data, Feedback, and Awareness Lead to Big Energy Savings; Connecting

  14. Buildings Energy Data Book: 5.7 Appliances

    Buildings Energy Data Book [EERE]

    2007 Personal Computer Manufacturer Market Shares (Percent of Products Produced)

        Company           Desktop Market Share (%)   Portable Market Share (%)
        Dell              32                         25
        Hewlett-Packard   24                         26
        Gateway           5                          4
        Apple             4                          9
        Acer America      3                          N/A
        IBM               1                          N/A
        Micron            0                          N/A
        Toshiba           N/A                        12
        Lenovo (IBM)      N/A                        6
        Sony              N/A                        5
        Fujitsu Siemens   N/A                        1
        Others            30                         13
        Total             100                        100

    Note(s)/Source(s): Total Desktop Computer Units Shipped: 34,211,601; Total Portable Computer Units Shipped: 30,023,844

  15. Weaving New York's Solar Industry Web | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Weaving New York's Solar Industry Web. June 29, 2010 - 11:00am. Solar films are manufactured at Precision Flow Technologies' facility in Kingston, N.Y. The factory once served as an IBM plant. | Photo Courtesy of Kevin Brady. Stephen Graff, Former Writer & editor for Energy Empowers, EERE

  16. PII: S0304-8853(99)00407-2

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tel.: +001-408-927-2461; fax: +001-408-927-2100. E-mail address: stohr@almaden.ibm.com (J. Stöhr). Journal of Magnetism and Magnetic Materials 200 (1999) 470-497. Exploring the microscopic origin of magnetic anisotropies with X-ray magnetic circular dichroism (XMCD) spectroscopy. J. Stöhr*, IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099, USA. Received 11 February 1999; received in revised form 13 April 1999. Abstract: Symmetry breaking and bonding at

  17. Interacting boson model from energy density functionals: {gamma}-softness and the related topics

    SciTech Connect (OSTI)

    Nomura, K.

    2012-10-20

    A comprehensive way of deriving the Hamiltonian of the interacting boson model (IBM) is described. Based on the fact that the multi-nucleon induced surface deformation in finite nucleus is simulated by effective boson degrees of freedom, the potential energy surface calculated with self-consistent mean-field method employing a given energy density functional (EDF) is mapped onto the IBM analog, and thereby the excitation spectra and transition rates with good symmetry quantum numbers are calculated. Recent applications of the proposed approach are reported: (i) an alternative robust interpretation of the {gamma}-soft nuclei and (ii) shape coexistence in lead isotopes.
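
    Schematically, the mapping described above determines the IBM Hamiltonian by equating the mean-field potential energy surface to the coherent-state expectation value of the boson Hamiltonian (a sketch of the general idea, not the paper's exact prescription):

        E_{\mathrm{EDF}}(\beta,\gamma) \approx
            \frac{\langle \phi(\beta,\gamma) | \hat{H}_{\mathrm{IBM}} | \phi(\beta,\gamma) \rangle}
                 {\langle \phi(\beta,\gamma) | \phi(\beta,\gamma) \rangle} ,

    where |φ(β,γ)⟩ is the boson coherent state and the IBM parameters are fixed so that the two surfaces agree in the relevant region of deformation.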

  18. heat_ghc02

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Figure: calculation time (sec) versus expansion level on the IBM SP, comparing L1 and L2 runs on 16 and 64 PEs.]

  19. The quest data mining system

    SciTech Connect (OSTI)

    Agrawal, R.; Mehta, M.; Shafer, J.; Srikant, R.

    1996-12-31

    The goal of the Quest project at the IBM Almaden Research center is to develop technology to enable a new breed of data-intensive decision-support applications. This paper is a capsule summary of the current functionality and architecture of the Quest data mining System.

  20. Intellectual Property (IP) Service Providers for Acquisition and Assistance

    Energy Savers [EERE]

    Transactions | Department of Energy DOE_IP_Counsel_for_DOE_Laboratories 2015 More Documents & Publications Intellectual Property (IP) Service Providers for Acquisition and Assistance Transactions WA_05_056_IBM_WATSON_RESEARCH_CENTER_Waiver_of_Domestic_and_.pdf Identified Patent Waiver W(I)2012-009

  1. July 28, 2010, Partnerships of academia, industry, and government labs

    Energy Savers [EERE]

    * Interdisciplinary nature of research * Rapid transition from research to products * One size does not fit all. Partnerships of academia, industry, and government labs. Network Science Collaborative Technology Alliance: an Interdisciplinary Collaboration Model. Social/Cognitive Network ARC * Principal Member - Rensselaer Polytechnic Institute * General Members - CUNY, Northeastern Univ, IBM. Communication Networks ARC * Principal Member - Penn State

  2. New Advances in Neutrinoless Double Beta Decay Matrix Elements

    SciTech Connect (OSTI)

    Munoz, Jose Barea [Instituto de Estructura de la Materia, C.S.I.C. Unidad Asociada al Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Apartado 1065, 41080 Sevilla (Spain)

    2010-08-04

    We present the matrix elements necessary to evaluate the half-life of some neutrinoless double beta decay candidates in the framework of the microscopic interacting boson model (IBM). We compare our results with those from other models and extract some simple features of the calculations.

  3. Neutrinoless double beta decay in the microscopic interacting boson model

    SciTech Connect (OSTI)

    Iachello, F. [Center for Theoretical Physics, Sloane Physics Laboratory Yale University New Haven, CT 06520-8120 (United States)

    2009-11-09

    The results of a calculation of the nuclear matrix elements for neutrinoless double beta decay in the closure approximation in several nuclei within the framework of the microscopic interacting boson model (IBM-2) are presented and compared with those calculated in the shell model (SM) and quasiparticle random phase approximation (QRPA).

  4. MS FORTRAN Extended Libraries

    Energy Science and Technology Software Center (OSTI)

    1986-09-01

    DISPPAK is a set of routines for use with Microsoft FORTRAN programs that allows the flexible display of information on the screen of an IBM PC in both text and graphics modes. The text mode routines allow the cursor to be placed at an arbitrary point on the screen and text to be displayed at the cursor location, making it possible to create menus and other structured displays. A routine to set the color of the characters that these routines display is also provided. A set of line drawing routines is included for use with IBM's Color Graphics Adapter or an equivalent board (such as the Enhanced Graphics Adapter in CGA emulation mode). These routines support both pixel coordinates and a user-specified set of real number coordinates. SUBPAK is a function library which allows Microsoft FORTRAN programs to calculate random numbers, issue calls to the operating system, read individual characters from the keyboard, perform Boolean and shift operations, and communicate with the I/O ports of the IBM PC. In addition, peek and poke routines, a routine that returns the address of any variable, and routines that can access the system time and date are included.

  5. NUG2013UserSurvey.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NUG 2013 User Survey - Advanced Architectures & Programming Models

        Architecture   GPUs    Multi-Threaded   MIC    IBM Cell
        Big MPP        22.4%   9.2%             5.1%   3.1%
        Medium MPP     20.3%   14.2%            3.6%   2.5%
    ...

  6. History of Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    208 16 3,328 3,328 1.0 GB IBM Colony 3,052 (2) 4,992 JWatson 1999 Cray Y-MP J90 Cray CMOS 100 MHz 1 32 32 8 250 MB SMP 6.4 FCrick 1999 Cray Y-MP J90 Cray CMOS 100 MHz 1 32 32 8...

  7. Project Final Report: HPC-Colony II

    SciTech Connect (OSTI)

    Jones, Terry R; Kale, Laxmikant V; Moreira, Jose

    2013-11-01

    This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included.

  8. EIA directory of electronic products. First quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-04-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  9. EIA directory of electronic products, Third quarter 1995

    SciTech Connect (OSTI)

    1996-02-01

    EIA makes available for public use a series of machine-readable data files and computer models on magnetic tapes. Selected data files/models are also available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published. Ordering information is given in the preface. Indexes are included.

  10. EIA directory of electronic products. Fourth quarter 1995

    SciTech Connect (OSTI)

    1996-08-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  11. Opportunities for high aspect ratio micro-electro-magnetic-mechanical systems (HAR-MEMMS) at Lawrence Berkeley Laboratory

    SciTech Connect (OSTI)

    Hunter, S.

    1993-10-01

    This report contains viewgraphs on the following topics: Opportunities for HAR-MEMMS at LBL; Industrial Needs and Opportunities; Deep Etch X-ray Lithography; MEMS Activities at BSAC; DNA Amplification with Microfabricated Reaction Chamber; Electrochemistry Research at LBL; MEMS Activities at LLNL; Space Microsensors and Microinstruments; The Advanced Light Source; Institute for Micromaching; IBM MEMS Interests; and Technology Transfer Opportunities at LBL.

  12. Electromagnetic Reciprocity.

    SciTech Connect (OSTI)

    Aldridge, David F.

    2014-11-01

    A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously-developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. ACKNOWLEDGEMENTS: Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a geophysical consultant) and Dr. Chester J. Weiss (recently rejoined with Sandia National Laboratories) for many stimulating (and reciprocal!) discussions regarding the topic at hand.
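
    For context, the best-known relation of this type is the Lorentz reciprocity theorem (quoted here as textbook background, not as the report's generalized statements): for two time-harmonic current distributions J_A and J_B radiating in the same linear, reciprocal medium, the fields they produce satisfy

        \int_V \left( \mathbf{J}_A \cdot \mathbf{E}_B - \mathbf{J}_B \cdot \mathbf{E}_A \right) dV = 0 ,

    i.e., source A "received" by field B equals source B "received" by field A, which is the kind of property the two modeling algorithms were tested against.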

  13. 3081/E processor

    SciTech Connect (OSTI)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Rankin, P.; Trang, Q.; Ferran, P.M.; Fucci, A.; Hinton, R.; Jacobs, D.; Martin, B.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future.

  14. A study of electromagnetic characteristics of {sup 124,126,128,130,132,134,136}Ba isotopes performed in the framework of IBA

    SciTech Connect (OSTI)

    Turkan, N.

    2010-01-15

    It is shown that the level schemes of the transitional nuclei {sup 124,126,128,130,132,134,136}Ba can be studied with both versions (IBM-1 and IBM-2) of the interacting boson model, and an adequate description of E2 transitions within the model is thereby confirmed. Most of the {delta}(E2/M1) ratios that have not been known so far are given, and the set of parameters used in these calculations represents the best approximation carried out to date. The interacting boson approximation turns out to be fairly reliable for the calculation of spectra across the entire set of {sup 124,126,128,130,132,134,136}Ba isotopes.
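
    For reference, the {delta}(E2/M1) mixing ratio compares the reduced E2 and M1 matrix elements of the same γ transition; in the widely used Krane-Steffen convention (quoted as background - the paper may adopt a different phase convention):

        \delta(E2/M1) = 0.835 \, E_\gamma[\mathrm{MeV}] \,
            \frac{\langle f \| \hat{T}(E2) \| i \rangle \; [e\,\mathrm{b}]}
                 {\langle f \| \hat{T}(M1) \| i \rangle \; [\mu_N]} .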

  15. ICP-MS Data Analysis Software

    Energy Science and Technology Software Center (OSTI)

    1999-01-14

    VG2Xl - this program reads binary data files generated by VG Instruments inductively coupled plasma-mass spectrometers using PlasmaQuad software versions 4.2.1 and 4.2.2 running under IBM OS/2. ICPCalc - this module is a macro for Microsoft Excel written in VBA (Visual Basic for Applications) that performs data analysis for ICP-MS data required for nuclear materials that cannot readily be done with the vendor's software. VG2GRAMS - this program reads binary data files generated by VG Instruments inductively coupled plasma mass spectrometers using PlasmaQuad software versions 4.2.1 and 4.2.2 running under IBM OS/2.

  16. A valiant little terminal: A VLT user's manual

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

    VLT came to be used at SLAC (Stanford Linear Accelerator Center), because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and T{sub E}X workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public domain VT100 emulator written by Dave Wecker et al. and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  17. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    SciTech Connect (OSTI)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  18. Molecular Foundry

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Marissa Libbee Scientific Engineering Associate, NCEM mlibbee@lbl.gov 510.495.2308 Biography Marissa Libbee transitioned from the liberal arts world in 2005 and spent the next two years at the Center for Mathematics and Applied Sciences at San Joaquin Delta College where she completed her studies on electron microscopy with an emphasis on crystalline materials and biological ultra-structure. Before joining NCEM, Marissa worked for IBM Almaden on multi-layer magnetic thin films, for SanDisk with

  19. Timeline

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Timeline Timeline Date Event May 1, 2010 Account charging starts Mar 22, 2010 All active NERSC user accounts enabled Mar 17, 2010 Magellan queues added Mar 12, 2010 System accepted Feb 22, 2010 Selected NERSC user accounts enabled Jan 29, 2010 Acceptance Test Begins Jan 04, 2010 System integration begins at NERSC Oakland Scientific Facility Oct 06, 2009 Contract awarded to IBM by DOE Last edited: 2016-04-29 11:34:54

  20. Untitled

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Unsolicited Proposals Unsolicited Proposals The Department of Energy's (DOE's) central point of receipt for all Unsolicited Proposals is the National Energy Technology Laboratory (NETL) which includes all DOE Program Research Areas. http://www.netl.doe.gov/business/usp/unsol.html

    Scaling Behavior of the GFDL FMS High-Resolution Atmosphere Model on the Argonne BG/Q Platform. IBM BG/Q: A Platform for Performance Discovery. Table of Contents: Understanding the Scaling Behavior of the GFDL

  1. Using Globus | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using Globus: Globus addresses the challenges faced by researchers in moving, sharing, and archiving large volumes of data among distributed

  2. Using HPSS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using HPSS: HPSS is a data archive and retrieval system that manages large amounts of data on disk and robotic tape libraries. It

  3. Watson Workshop

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Watson Workshop, January 21, 2016. The Data and Analytics Services group is coordinating an IBM Watson Workshop at LBL and NERSC on Thursday, January 21. Watson is an artificial intelligence system combining advanced natural language processing, machine learning, and information retrieval technologies. The workshop provides attendees with an overview of Watson and an opportunity to explore potential partnerships with the Watson team. The overview

  4. 1

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    supercomputer remains fastest in world November 18, 2008 New TOP500 list is announced IBM/LANL Roadrunner hybrid supercomputer still #1 LOS ALAMOS, New Mexico, November 18, 2008 -The latest list of the TOP500 computers in the world has been announced at the SC08 supercomputing conference in Austin, Texas, and continued to place the Roadrunner supercomputer at Los Alamos National Laboratory as fastest in the world running the LINPACK benchmark-the industry standard for measuring sustained

  5. 1950s | OSTI, US Dept of Energy, Office of Scientific and Technical

    Office of Scientific and Technical Information (OSTI)

    Information 50s To view OSTI Historical Photo Gallery, you can browse the collections below. 1940s | 1960s | 1970s | 1980s | 1990s | 2000s 1950: Remodeling Building 1950: Display 1950: Documents 1950: Group Photo 1950: IBM Punch Cards 1950: Maintenance of Kodak Film Processor 1950: Atoms for Peace Program Material 1950: Troops Train 1950: Manager 1951-1955 Armen Gregory Abdian 1950: United Nations 1950: Filing Cabinets 1950: Composition Section 1950: Geneva Conference 1950: International

  6. Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc.

    U.S. Energy Information Administration (EIA) Indexed Site

    8: "Smart Grid: Impacts on Electric Power Supply and Demand" Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc. Christian Grant, Booz & Company, Inc. F. Michael Valocchi, IBM Global Business Services [Note: Recorders did not pick up introduction of panel (see biographies for details on the panelists) or introduction of session.] Eric Lightner: Well, good morning, everybody. My name is Eric Lightner. I work at the U.S. Department of

  7. Example Program and Makefile for BG/Q | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Example Program and Makefile for BG/Q

  8. Torus Network on a BG/Q System | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blue Gene/Q Versus Blue Gene/P BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Torus Network on a BG/Q System Torus

  9. Overview of How to Compile and Link | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Overview of How to Compile and Link A thorough

  10. PAPI | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new

  11. carverintro.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Carver David Turner NERSC User Services Group NUG Meeting, October 18, 2010 2 Tutorial Overview * Background * Hardware * Software * Programming * Running Jobs 3 Background * Replace Bassi and Jacquard * Hardware procurement - "Scalable Units" * Two funding sources - Carver * NERSC program funds - Magellan * ARRA funds 4 System Overview * IBM iDataPlex System - 14 compute racks * 80 nodes/rack (1120 total compute nodes) * "Water cooled" - 5 service racks * Login, I/O, and

  12. gdb | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gdb Using gdb Preliminaries You should prepare a debug version of your code: Compile using -O0 -g If you are using the XL
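
    A minimal sketch of the debug-build-then-inspect workflow this page describes, assuming a generic C program; the compiler invocations and gdb commands in the comments are illustrative examples, not commands quoted from the page:

      /* example.c - a small program to step through with gdb.
       * Build without optimization and with debug symbols, e.g.:
       *   gcc -O0 -g -o example example.c
       * (the BG/Q XL wrappers such as mpixlc_r accept the same flags).
       * Then, for instance:
       *   gdb ./example
       *   (gdb) break sum
       *   (gdb) run
       *   (gdb) next          # step through the loop
       *   (gdb) print total   # inspect a local variable
       */
      #include <stdio.h>

      static long sum(long n) {
          long total = 0;
          for (long i = 1; i <= n; ++i)   /* a convenient place for a breakpoint */
              total += i;
          return total;
      }

      int main(void) {
          printf("sum(100) = %ld\n", sum(100));
          return 0;
      }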

  13. gprof Profiling Tools | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gprof Profiling Tools Contents Introduction Profiling on the

  14. Audit Report: OAS-L-04-22 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2 Audit Report: OAS-L-04-22 September 22, 2004 Completion of the Terascale Simulation Facility Project PDF icon OAS-L-04-22.pdf More Documents & Publications Audit Report: OAS-M-10-02 WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf

  15. Recent developments in the theory of double beta decay

    SciTech Connect (OSTI)

    Iachello, F.; Kotila, J.; Barea, J.

    2013-12-30

    We report results of a novel calculation of phase space factors for 2{nu}{beta}{sup +}{beta}{sup +}, 2{nu}{beta}{sup +}EC, 2{nu}ECEC, 0{nu}{beta}{sup +}{beta}{sup +}, and 0{nu}{beta}{sup +}EC using exact Dirac wave functions, and finite nuclear size and electron screening corrections. We present results of expected half-lives for 0{nu}{beta}{sup +}{beta}{sup +} and 0{nu}{beta}{sup +}EC decays obtained by combining the calculation of phase space factors with IBM-2 nuclear matrix elements.
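
    For context, such half-life estimates rest on the standard factorization of the decay rate into a phase space factor G and a nuclear matrix element M (a textbook relation, not quoted from the abstract); for the neutrinoless modes the effective neutrino mass enters as well:

      \left[ t_{1/2}^{2\nu} \right]^{-1} = G^{2\nu} \left| M^{2\nu} \right|^{2},
      \qquad
      \left[ t_{1/2}^{0\nu} \right]^{-1} = G^{0\nu} \left| M^{0\nu} \right|^{2}
      \left( \frac{\langle m_{\nu} \rangle}{m_{e}} \right)^{2}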

  16. Baseline and Target Values for PV Forecasts: Toward Improved Solar Power Forecasting: Preprint

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Baseline and Target Values for PV Forecasts: Toward Improved Solar Power Forecasting Preprint Jie Zhang 1 , Bri-Mathias Hodge 1 , Siyuan Lu 2 , Hendrik F. Hamann 2 , Brad Lehman 3 , Joseph Simmons 4 , Edwin Campos 5 , and Venkat Banunarayanan 6 1 National Renewable Energy Laboratory 2 IBM TJ Watson Research Center 3 Northeastern University 4 University of Arizona 5 Argonne National Laboratory 6 U.S. Department of Energy Presented at the IEEE Power and Energy Society General Meeting Denver,

  17. Performance Application Programming Interface

    Energy Science and Technology Software Center (OSTI)

    2005-10-31

    PAPI is a programming interface designed to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. This release covers the hardware dependent implementation of PAPI version 3 for the IBM BlueGene/L (BG/L) system.
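
    As an illustration of the interface described above, a minimal counting sketch in C using the version-3 high-level calls; the two events are examples and must be supported by the platform (check with the papi_avail utility):

      /* papi_example.c - count cycles and instructions around a loop.
       * Uses the PAPI 3 high-level API; link with -lpapi.
       */
      #include <stdio.h>
      #include <papi.h>

      int main(void) {
          int events[2] = { PAPI_TOT_CYC, PAPI_TOT_INS };  /* example events */
          long long counts[2];

          if (PAPI_start_counters(events, 2) != PAPI_OK) {
              fprintf(stderr, "could not start counters\n");
              return 1;
          }

          volatile double x = 0.0;                 /* region of interest */
          for (int i = 0; i < 1000000; ++i)
              x += 1e-6;

          if (PAPI_stop_counters(counts, 2) != PAPI_OK) {
              fprintf(stderr, "could not stop counters\n");
              return 1;
          }
          printf("cycles=%lld instructions=%lld\n", counts[0], counts[1]);
          return 0;
      }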

  18. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    SciTech Connect (OSTI)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  19. EIA - Energy Conferences & Presentations.

    U.S. Energy Information Administration (EIA) Indexed Site

    8 EIA Conference 2010 Session 8: Smart Grid: Impacts on Electric Power Supply and Demand Moderator: Eric M. Lightner, DOE Speakers: William M. Gausman, Pepco Holdings Christian Grant, Booz & Company, Inc. Michael Valocchi, IBM Global Business Services Moderator and Speaker Biographies Eric M. Lightner, DOE Eric M. Lightner has worked as a program manager for advanced technology development at the U.S. Department of Energy for the last 20 years. Currently, Mr. Lightner is the Director of the

  20. Runjob termination | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Transfer Debugging & Profiling Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Runjob termination A compute-node execution is initiated by the runjob command (Blue

  1. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect (OSTI)

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines the MPI message-passing and threading paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP ASCI Blue-Pacific computer located at Lawrence Livermore National Laboratory (LLNL).
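
    As a generic illustration of that hybrid pattern (a sketch of MPI across nodes plus threads within a node, not code from the transport system itself):

      /* hybrid.c - MPI between nodes, OpenMP threads within a node.
       * Build e.g.: mpicc -fopenmp -o hybrid hybrid.c
       * Run   e.g.: mpirun -np 4 ./hybrid
       */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int provided, rank;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          double local = 0.0;
          /* threads share the node-local work */
          #pragma omp parallel for reduction(+:local)
          for (int i = 0; i < 1000000; ++i)
              local += 1.0 / (i + 1.0);      /* stand-in for per-cell transport work */

          double global = 0.0;
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("global sum = %f (threads/rank: %d)\n", global, omp_get_max_threads());
          MPI_Finalize();
          return 0;
      }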

  2. Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne National Laboratory Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography October 1, 2012 Users of the Center for Nanoscale Materials (CNM) from IBM exploited nanofocused X-ray Bragg projection ptychography to determine the lattice strain profile in an epitaxial SiGe stressor layer of a silicon prototype device. The theoretical and experimental framework of this new coherent diffraction strain imaging approach was developed by Argonne's Materials

  3. Sequoia supercomputer tops Graph 500 | National Nuclear Security

    National Nuclear Security Administration (NNSA)

    Administration Sequoia supercomputer tops Graph 500 Wednesday, November 19, 2014 - 11:34am Lawrence Livermore National Laboratory scientists' search for new ways to solve large complex national security problems led to the top ranking on Graph 500 and new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server. Lawrence Livermore's Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world's best

  4. Organization-About-PHaSe-EFRC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    About Mission Statement (PDF) Organization Contact Us organization This webpage is provided for legacy archive purposes only, as of 30 April 2015. The day-to-day operations of the University of Massachusetts Amherst PHaSE EFRC are administered by co-directors. Russell is the Samuel Conte Distinguished Professor of Polymer Science and Engineering, with years of previous experience at IBM Research, over 620 publications and 21 patents for polymer chemistry and physics. Lahti has over 29 years at

  5. Cobalt Job Control | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Cobalt Job Control The queuing system used at ALCF is Cobalt. Cobalt has two ways to queue a run: the basic method and

  6. QBox | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] QBox What is Qbox? Qbox is a C++/MPI scalable parallel implementation of first-principles molecular dynamics (FPMD) based on the plane-wave, pseudopotential

  7. Intrepid/Challenger/Surveyor | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Intrepid/Challenger/Surveyor The ALCF houses several IBM Blue Gene/P supercomputers, among the world's fastest computing platforms. Intrepid Intrepid has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottlenecks common in simulations on large, parallel computers. Intrepid uses less

  8. Scalable computations in penetration mechanics

    SciTech Connect (OSTI)

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power Challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to the ideal linear performance.

  9. EIA directory of electronic products fourth quarter 1993

    SciTech Connect (OSTI)

    Not Available

    1994-02-23

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published.

  10. 2009 CNM Users Meeting | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    9 CNM Users Meeting October 5-7, 2009 Full Information Available Here Meeting Summary Plenary Session Views from DOE and Washington Keynote Presentations Stephen Chou (Princeton University), "Nanostructure Engineering: A Path to Discovery and Innovation" Andreas Heinrich (IBM Almaden Research Center), "The Quantum Properties of Magnetic Nanostructures on Surfaces" User Science Highlights Focus Sessions Nanostructured Materials for Solar Energy Utilization Materials and

  11. GAMESS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] GAMESS What Is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. For more information on GAMESS, see the Gordon research

  12. HPCTW | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] HPCTW Introduction HPCTW is a set of libraries that may be

  13. HPCToolkit | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] HPCToolkit References HPCToolkit Website HPCT Documentation

  14. Using VNC with a Debugger | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Debugging & Profiling Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Using VNC with a Debugger When displaying an X11 client (e.g., TotalView) remotely over the network,

  15. Software and Libraries | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Software and Libraries Expand All Close All Mira/Cetus Vesta

  16. System Overview | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Blue Gene/Q Versus Blue Gene/P BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] System Overview Machine

  17. Compiling & Linking | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource.

  18. Data Storage & File Systems | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BG/Q File Systems Disk Quota Using HPSS Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Storage & File Systems BG/Q File Systems BG/Q File Systems: An overview of the BG/Q file systems available at ALCF. Disk

  19. Data Transfer | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using Globus Using GridFTP Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Transfer The Blue Gene/Q will connect to other research institutions using a total of 100 Gbit/s of public network connectivity. This allows scientists to transfer datasets to and from other institutions

  20. Allinea DDT | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allinea DDT References Allinea DDT Website Allinea DDT User Guide Availability You can use Allinea DDT to debug up to full

  1. BG/Q Drivers Status | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blue Gene/Q Versus Blue Gene/P BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BG/Q Drivers Status The status of

  2. BG/Q File Systems | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BG/Q File Systems Disk Quota Using HPSS Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BG/Q File Systems Vesta and Mira have discrete file systems, with two main user file systems for each machine: home and

  3. BG/Q Performance Counters | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BG/Q Performance Counters The

  4. Validation of nuclear criticality safety software and 27 energy group ENDF/B-IV cross sections. Revision 1

    SciTech Connect (OSTI)

    Lee, B.L. Jr.; D`Aquila, D.M.

    1996-01-01

    The original validation report, POEF-T-3636, was documented in August 1994. The document was based on calculations that were executed during June through August 1992. The statistical analyses in Appendix C and Appendix D were completed in October 1993. This revision is written to clarify the margin of safety being used at Portsmouth for nuclear criticality safety calculations. This validation gives Portsmouth NCS personnel a basis for performing computerized KENO V.a calculations using the Lockheed Martin Nuclear Criticality Safety Software. The first portion of the document outlines basic information in regard to validation of NCSS using ENDF/B-IV 27-group cross sections on the IBM 3090 at ORNL. A basic discussion of the NCSS system is provided, along with some discussion of the validation database and of validation in general. Then follows a detailed description of the statistical analysis which was applied. The results of this validation indicate that the NCSS software may be used with confidence for criticality calculations at the Portsmouth Gaseous Diffusion Plant. For calculations of Portsmouth systems using the specified codes and systems covered by this validation, a maximum k{sub eff} including 2{sigma} of 0.9605 or lower shall be considered as subcritical to ensure a calculational margin of safety of 0.02. The validation of NCSS on the IBM 3090 at ORNL was extended to include NCSS on the IBM 3090 at K-25.
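
    Written out, the acceptance criterion stated in the abstract is simply:

      k_{\mathrm{eff}} + 2\sigma \;\le\; 0.9605

    That is, a KENO V.a result is credited as subcritical only if its computed multiplication factor plus two standard deviations stays at or below the 0.9605 upper limit, which builds in the stated calculational margin of safety of 0.02.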

  5. Design and Fabrication of a Radiation-Hard 500-MHz Digitizer Using Deep Submicron Technology

    SciTech Connect (OSTI)

    K.K. Gan; M.O. Johnson; R.D. Kass; J. Moore

    2008-09-12

    The proposed International Linear Collider (ILC) will use tens of thousands of beam position monitors (BPMs) for precise beam alignment. The signal from each BPM is digitized and processed for feedback control. We proposed the development of an 11-bit (effective) digitizer with 500 MHz bandwidth and 2 Gsamples/s. The digitizer was somewhat beyond the state-of-the-art. Moreover, we planned to design the digitizer chip using deep-submicron technology with custom transistors that had proven to be very radiation hard (up to at least 60 Mrad). The design mitigated the need for costly shielding and long cables while providing ready access to the electronics for testing and maintenance. In FY06, as we prepared to submit a chip with test circuits and a partial ADC circuit, we found that IBM had changed the availability of our chosen IC fabrication process (IBM 6HP SiGe BiCMOS), making it unaffordable for us, at roughly 3 times the previous price. This prompted us to change our design to the IBM 5HPE process with 0.35 µm feature size. We requested funding for FY07 to continue the design work and submit the first prototype chip. Unfortunately, the funding was not continued and we will summarize below the work accomplished so far.

  6. A Fault-Oblivious Extreme-Scale Execution Environment (FOX)

    SciTech Connect (OSTI)

    Van Hensbergen, Eric; Speight, William; Xenidis, Jimi

    2013-03-15

    IBM Research’s contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: ● collaboration with Boston University around the Kittyhawk cloud infrastructure which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes ● operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS ● IBM Research also participated in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA’s HPCS program. Both of these contributions were built on a foundation of previous operating systems research funding by the Department of Energy’s FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source code for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM’s approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exascale Operating Systems/Runtime program.

  7. Development of an Immersed Boundary Method to Resolve Complex Terrain in the Weather Research and Forecasting Model

    SciTech Connect (OSTI)

    Lunquist, K A; Chow, F K; Lundquist, J K; Mirocha, J D

    2007-09-04

    Flow and dispersion processes in urban areas are profoundly influenced by the presence of buildings which divert mean flow, affect surface heating and cooling, and alter the structure of turbulence in the lower atmosphere. Accurate prediction of velocity, temperature, and turbulent kinetic energy fields is necessary for determining the transport and dispersion of scalars. Correct predictions of scalar concentrations are vital in densely populated urban areas where they are used to aid in emergency response planning for accidental or intentional releases of hazardous substances. Traditionally, urban flow simulations have been performed by computational fluid dynamics (CFD) codes which can accommodate the geometric complexity inherent to urban landscapes. In these types of models the grid is aligned with the solid boundaries, and the boundary conditions are applied to the computational nodes coincident with the surface. If the CFD code uses a structured curvilinear mesh, then time-consuming manual manipulation is needed to ensure that the mesh conforms to the solid boundaries while minimizing skewness. If the CFD code uses an unstructured grid, then the solver cannot be optimized for the underlying data structure which takes an irregular form. Unstructured solvers are therefore often slower and more memory intensive than their structured counterparts. Additionally, urban-scale CFD models are often forced at lateral boundaries with idealized flow, neglecting dynamic forcing due to synoptic scale weather patterns. These CFD codes solve the incompressible Navier-Stokes equations and include limited options for representing atmospheric processes such as surface fluxes and moisture. Traditional CFD codes therefore possess several drawbacks, due to the expense of either creating the grid or solving the resulting algebraic system of equations, and due to the idealized boundary conditions and the lack of full atmospheric physics. Meso-scale atmospheric boundary layer simulations, on the other hand, are performed by numerical weather prediction (NWP) codes, which cannot handle the geometry of the urban landscape, but do provide a more complete representation of atmospheric physics. NWP codes typically use structured grids with terrain-following vertical coordinates, include a full suite of atmospheric physics parameterizations, and allow for dynamic synoptic scale lateral forcing through grid nesting. Terrain following grids are unsuitable for urban terrain, as steep terrain gradients cause extreme distortion of the computational cells. In this work, we introduce and develop an immersed boundary method (IBM) to allow the favorable properties of a numerical weather prediction code to be combined with the ability to handle complex terrain. IBM uses a non-conforming structured grid, and allows solid boundaries to pass through the computational cells. As the terrain passes through the mesh in an arbitrary manner, the main goal of the IBM is to apply the boundary condition on the interior of the domain as accurately as possible. With the implementation of the IBM, numerical weather prediction codes can be used to explicitly resolve urban terrain. Heterogeneous urban domains using the IBM can be nested into larger mesoscale domains using a terrain-following coordinate. The larger mesoscale domain provides lateral boundary conditions to the urban domain with the correct forcing, allowing seamless integration between mesoscale and urban scale models.
Further discussion of the scope of this project is given by Lundquist et al. [2007]. The current paper describes the implementation of an IBM into the Weather Research and Forecasting (WRF) model, which is an open source numerical weather prediction code. The WRF model solves the non-hydrostatic compressible Navier-Stokes equations, and employs an isobaric terrain-following vertical coordinate. Many types of IB methods have been developed by researchers; a comprehensive review can be found in Mittal and Iaccarino [2005]. To the authors' knowledge, this is the first IBM approach that is able to
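
    As a concrete sketch of how such a method imposes the wall condition on the interior of the domain, one common direct-forcing formulation from the IBM literature (a generic form, not necessarily the exact scheme implemented here) adds a forcing term to the discrete momentum equation at cells intersected by the terrain:

      \frac{u_i^{n+1} - u_i^{n}}{\Delta t} = \mathrm{RHS}_i + f_i,
      \qquad
      f_i = \frac{u_i^{\mathrm{BC}} - u_i^{n}}{\Delta t} - \mathrm{RHS}_i
      \quad \text{(at immersed-boundary cells, zero elsewhere)}

    so that the updated velocity at those cells equals the interpolated boundary value u_i^{BC} while the rest of the domain is advanced unchanged.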

  8. Performance assessment of OTEC power systems and thermal power plants. Final report. Volume I

    SciTech Connect (OSTI)

    Leidenfrost, W.; Liley, P.E.; McDonald, A.T.; Mudawwar, I.; Pearson, J.T.

    1985-05-01

    The focus of this report is on closed-cycle ocean thermal energy conversion (OTEC) power systems under research at Purdue University. The working operations of an OTEC power plant are briefly discussed. Methods of improving the performance of OTEC power systems are presented. Brief discussions on the methods of heat exchanger analysis and design are provided, as are the thermophysical properties of the working fluids and seawater. An interactive code capable of analyzing OTEC power system performance is included for use with an IBM personal computer.

  9. NUG Meeting February 22, 2001

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NUG Meeting February 22, 2001 Dates February 22, 2001 Location NERSC's Oakland Scientific Facility 415 20th St. [MAP] Oakland CA, 94612 NERSC's Web Site Presentations Agenda Thursday, February 22 8:00 - 8:30 Pastries and coffee available 8:30 - 8:45 Rob Ryne Introductions 8:45 - 9:30 Walt Polansky Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - ... Photos Notes for Greenbook Process W.

  10. ISC2005v2.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Supercomputing: The Top Three Breakthroughs of the Last 20 Years and the Top Three Challenges for the Next 20 Years Horst Simon Associate Laboratory Director Lawrence Berkeley National Laboratory ISC 2005 Heidelberg June 22, 2005 Signpost System 1985 Cray-2 * 244 MHz (4.1 nsec) * 4 processors * 1.95 Gflop/s peak * 2 GB memory (256 MW) * 1.2 Gflop/s LINPACK R_max * 1.6 m 2 floor space * 0.2 MW power Signpost System in 2005 IBM BG/L @ LLNL * 700 MHz (x 2.86) * 65,536 nodes (x 16,384) * 180 (360)

  11. Tomographic

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    study of atomic-scale redistribution of platinum during the silicidation of Ni{sub 0.95}Pt{sub 0.05}/Si(100) thin films. Praneet Adusumilli,{sup 1} Lincoln J. Lauhon,{sup 1} David N. Seidman,{sup 1,a} Conal E. Murray,{sup 2} Ori Avayu,{sup 3} and Yossi Rosenwaks{sup 3}. {sup 1}Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, Illinois 60208-3108, USA; {sup 2}IBM Thomas J. Watson Research Center, Yorktown Heights, New York 10598, USA; {sup 3}School of Electrical Engineering, Tel-Aviv

  12. ORNL Cray X1 evaluation status report

    SciTech Connect (OSTI)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.; Balay, S.; Bland, A.S; Colgan, J.; D'Azevedo, E.F.; Dongarra, J.J.; Dunigan Jr., T.H.; Fahey, M.R.; Fahey, R.A.; Geist, A.; Gordon, M.; Harrison, R.J.; Kaushik, D.; Krishnakumar, M.; Luszczek, P.; Mezzacappa, A.; Nichols, J.A.; Nieplocha, J.; Oliker, L.; Packwood, T.; Pindzola, M.S.; Schulthess, T.C.; Vetter, J.S.; White III, J.B.; Windus, T.L.; Worley, P.H.; Zacharia, T.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, software environment and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.

  13. Simulation analysis of within-day flow fluctuation effects on trout below flaming Gorge Dam.

    SciTech Connect (OSTI)

    Railsback, S. F.; Hayse, J. W.; LaGory, K. E.; Environmental Science Division; EPRI

    2006-01-01

    In addition to being renewable, hydropower has the advantage of allowing rapid load-following, in that the generation rate can easily be varied within a day to match the demand for power. However, the flow fluctuations that result from load-following can be controversial, in part because they may affect downstream fish populations. At Flaming Gorge Dam, located on the Green River in northeastern Utah, concern has been raised about whether flow fluctuations caused by the dam disrupt feeding at a tailwater trout fishery, as fish move in response to flow changes and as the flow changes alter the amount or timing of the invertebrate drift that trout feed on. Western Area Power Administration (Western), which controls power production on submonthly time scales, has made several operational changes to address concerns about flow fluctuation effects on fisheries. These changes include reducing the number of daily flow peaks from two to one and operating within a restricted range of flows. These changes significantly reduce the value of the power produced at Flaming Gorge Dam and put higher load-following pressure on other power plants. Consequently, Western has great interest in understanding what benefits these restrictions provide to the fishery and whether adjusting the restrictions could provide a better tradeoff between power and non-power concerns. Directly evaluating the effects of flow fluctuations on fish populations is unfortunately difficult. Effects are expected to be relatively small, so tightly controlled experiments with large sample sizes and long study durations would be needed to evaluate them. Such experiments would be extremely expensive and would be subject to the confounding effects of uncontrollable variations in factors such as runoff and weather. Computer simulation using individual-based models (IBMs) is an alternative study approach for ecological problems that are not amenable to analysis using field studies alone. An IBM simulates how a population responds to environmental changes by representing how the population's individuals interact with their environment and each other. IBMs represent key characteristics of both individual organisms (trout, in this case) and the environment, thus allowing controlled simulation experiments to analyze the effects of changes in the key variables. For the flow fluctuation problem at Flaming Gorge Dam, the key environmental variables are flow rates and invertebrate drift concentrations, and the most important processes involve how trout adapt to changes (over space and time) in growth potential and mortality risk. This report documents simulation analyses of flow fluctuation effects on trout populations. The analyses were conducted in a highly controlled fashion: an IBM was used to predict production (survival and growth) of trout populations under a variety of scenarios that differ only in the level or type of flow fluctuation.
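
    As a generic illustration of what an individual-based model does (a toy sketch of the technique, not the trout model used in this study; all parameter values are made up), each simulated individual carries its own state and is updated against the environment at every time step:

      /* ibm_sketch.c - toy individual-based model update loop. */
      #include <stdio.h>
      #include <stdlib.h>

      #define N_FISH 1000
      #define N_DAYS 30

      typedef struct {
          double mass;   /* grams */
          int alive;
      } Fish;

      /* toy environment: flow (m3/s) modulates drift food and mortality risk */
      static double drift_intake(double flow)  { return 0.05 + 0.01 * flow; }
      static double survival_prob(double flow) { return flow > 20.0 ? 0.995 : 0.999; }

      int main(void) {
          Fish fish[N_FISH];
          for (int i = 0; i < N_FISH; ++i) fish[i] = (Fish){ 10.0, 1 };

          for (int day = 0; day < N_DAYS; ++day) {
              double flow = 15.0 + 10.0 * (day % 2);  /* crude flow-fluctuation stand-in */
              for (int i = 0; i < N_FISH; ++i) {
                  if (!fish[i].alive) continue;
                  fish[i].mass += drift_intake(flow);       /* growth from drift feeding */
                  if ((double)rand() / RAND_MAX > survival_prob(flow))
                      fish[i].alive = 0;                    /* stochastic mortality */
              }
          }

          int survivors = 0; double mass = 0.0;
          for (int i = 0; i < N_FISH; ++i)
              if (fish[i].alive) { ++survivors; mass += fish[i].mass; }
          printf("survivors: %d, mean mass: %.2f g\n",
                 survivors, survivors ? mass / survivors : 0.0);
          return 0;
      }

    Population-level outcomes (survival, growth) then emerge from the per-individual rules, which is what lets such a model compare flow-fluctuation scenarios that differ only in the environment time series.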

  14. Bridging the Gap to 64-bit Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Opteron and AMD64 A Commodity 64 bit x86 SOC Fred Weber Vice President and CTO Computation Products Group Advanced Micro Devices 22 April 2003 AMD - Salishan HPC 2003 2 Opteron/AMD64 Launch - Today! * Official Launch of AMD64 architecture and Production Server/Workstation CPUs - Series 200 (2P) available today - Series 800 (4P+) available later in Q2 * Oracle, IBM-DB2, Microsoft, RedHat, SuSe software support - And many others * Dozens of server system vendors - System builder availability this

  15. Experiences from the Roadrunner petascale hybrid systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J; Pakin, Scott; Lang, Mike; Sancho Pitarch, Jose C; Davis, Kei; Barker, Kevin J; Peraza, Josh

    2010-01-01

    The combination of flexible microprocessors (AMD Opterons) with high-performing accelerators (IBM PowerXCell 8i) resulted in the extremely powerful Roadrunner system. Many challenges in both hardware and software were overcome to achieve its goals. In this talk we detail some of the experiences in achieving performance on the Roadrunner system. In particular we examine several implementations of the kernel application, Sweep3D, using a work-queue approach, a more portable Threading Building Blocks approach, and an MPI-on-the-accelerator approach.

  16. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4, 2015 Time: 11:00 am Speaker: Michael A. Guillorn, IBM T. J. Watson Research Center Title: Self-assembled, self-aligned and self healing: CMOS scaling enabled by stochastic suppression at the nanoscale Location: 67-3111 Chemla Room Abstract: The end of CMOS density scaling has been erroneously predicted by a number of authors for several decades. A review of some of this work was presented by Haensch, et al[1]. Many of these predictions arose from a belief that the only possible solutions to

  17. Computers for artificial intelligence a technology assessment and forecast

    SciTech Connect (OSTI)

    Miller, R.K.

    1986-01-01

    This study reviews the development and current state-of-the-art in computers for artificial intelligence, including LISP machines, AI workstations, professional and engineering workstations, minicomputers, mainframes, and supercomputers. Major computer systems for AI applications are reviewed. The use of personal computers for expert system development is discussed, and AI software for the IBM PC, Texas Instruments Professional Computer, and Apple Macintosh is presented. Current research aimed at developing a new computer for artificial intelligence is described, and future technological developments are discussed.

  18. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema (OSTI)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2014-06-05

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  19. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect (OSTI)

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS workstation and/or the IBM-compatible personal computer.

  20. Featured Announcements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    April 2013 2014 INCITE Call for Proposals - Due June 28 April 30, 2013 by Francesca Verdier The 2014 INCITE Call for Proposals is now open. Open to researchers from academia, government labs, and industry, the INCITE Program is the major means by which the scientific community gains access to the Leadership Computing Facilities' resources. INCITE is currently soliciting proposals for research on the 27-petaflops Cray XK7 "Titan" and the 10-petaflops IBM Blue Gene/Q "Mira"

  1. DOE/SF/15929-1 FSC-ESD-86-368-11 SURVIVABILITY ENHANCEMENT STUDY

    Office of Scientific and Technical Information (OSTI)

    SF/15929-1 FSC-ESD-86-368-11 SURVIVABILITY ENHANCEMENT STUDY FOR C3I/BM GROUND SEGMENTS FINAL REPORT OCTOBER 30, 1986 Prepared for: UNITED STATES DEPARTMENT OF ENERGY SAN FRANCISCO OPERATIONS OFFICE 1333 BROADWAY OAKLAND, CALIFORNIA 94612 CONTRACT NO. DE-AC03-85SF15929 SPACE COMPANY GERMANTOWN MARYLAND 20874-1181 DISCLAIMER This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government

  2. simulators | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    simulators DOE/BC-89/3/SP. Handbook for Personal Computer Version of BOAST II: A Three- Dimensional, Three-Phase Black Oil Applied Simulation Tool. Bartlesville Project Office. January 1989. 82 pp. NTIS Order No. DE89000725. FORTRAN source code and executable program. Min. Req.: IBM PC/AT, PS-2, or compatible computer with 640 Kbytes of memory. Download 464 KB Manual 75 KB Manual 404 KB Reference paper (1033-3,v1) by Fanchi, et al. Manual 83 KB Reference paper (1033-3,v2) by Fanchi, et al. BOAST

  3. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentations Presentations Sort by: Default | Name | Date (low-high) | Date (high-low) | Source | Category Perspectives from Washington February 22, 2001 | Author(s): Walt Polansky | Download File: Polansky.NUGMeeting2-01.ppt | ppt | 750 KB Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8 MB Goals for the next Greenbook February 22, 2001 | Author(s): Doug Rotman |

  4. Early Site Permit Demonstration Program: Nuclear Power Plant Siting Database

    Energy Science and Technology Software Center (OSTI)

    1994-01-28

    This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or combined construction permit/operating license (10 CFR Part 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM compatible computers with DOS.

  5. Machine Overview | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview Blue Gene/Q systems are composed of login nodes, I/O nodes, and compute nodes. Login Nodes Login and compile nodes are IBM Power 7-based systems running Red Hat Linux and are the user's interface to a Blue Gene/Q system. This is where users login, edit files, compile, and submit jobs. These are shared resources with multiple users. I/O Nodes The I/O node and compute environments are based around a very simple 1.6 GHz 16 core PowerPC A2 system with 16 GB of RAM. I/O node environments are

  6. Market Evolution: Wholesale Electricity Market Design for 21st Century Power Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    1stCenturyPower.org Technical Report NREL/TP-6A20-57477 October 2013 Contract No. DE-AC36-08GO28308 Market Evolution: Wholesale Electricity Market Design for 21 st Century Power Systems Jaquelin Cochran, Mackay Miller, Michael Milligan, Erik Ela, Douglas Arent, and Aaron Bloom National Renewable Energy Laboratory Matthew Futch IBM Juha Kiviluoma and Hannele Holtinnen VTT Technical Research Centre of Finland Antje Orths Energinet.dk Emilio Gómez-Lázaro and Sergio Martín-Martínez Universidad

  7. Hopper:Improving I/O performance to GSCRATCH and PROJECT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GSCRATCH/PROJECT Performance Tuning on Hopper Hopper:Improving I/O performance to GSCRATCH and PROJECT What are GSCRATCH/PROJECT? GSCRATCH and PROJECT are two file systems at NERSC that one can access on most computational systems. They are both based on the IBM GPFS file system and have multiple racks of dedicated servers and disk arrays. How are GSCRATCH/PROJECT connected to Hopper? As shown in the figure below, GSCRATCH and PROJECT are each connected to several Private NSD Servers (PNSD; for

  8. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Agenda NUG Meeting: June 5-6, 2000 Garden Plaza Hotel, Oak Ridge, TN The next NERSC User Group meeting will be held in Oak Ridge, TN, June 5-7 and will be hosted by Oak Ridge National Laboratory (ORNL). See the agenda, below. The meeting will be all day Monday, June 5, and is expected to finish Tuesday, June 6, at lunchtime. Following this business meeting will be a training class on the new IBM SP in conjunction with Users Helping Users (UHU) talks and discussions with the consultants.

  9. EIA directory of electronic products, first quarter 1995

    SciTech Connect (OSTI)

    1995-06-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). For each product listed in this directory, a detailed abstract is provided which describes the data published. Specific technical questions may be referred to the appropriate contact person.

  10. EIA directory of electronic products. Third quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-09-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). Inquirers may telephone NEIC`s information specialists at (202) 586-8800 with any data questions relating to the content of EIA Directory of Electronic Products.

  11. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Introduction to the NERSC HPCF (High Performance Computing Facilities) June 7, 2000 | Author(s): Thomas M. DeBoni | Download File: IntroTalk.ppt | ppt | 228 KB This talk will briefly introduce the NERSC hardware and software of the computational systems, mass storage systems, and auxiliary servers. It will also touch on matters of usage, access, and information sources. The intent is to establish a baseline of knowledge for all attendees. The IBM SP, Evolution from Phase I to Phase II June 7,

  12. Dose commitments due to radioactive releases from nuclear power plant sites: Methodology and data base. Supplement 1

    SciTech Connect (OSTI)

    Baker, D.A.

    1996-06-01

    This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix I design objectives. This supplement is the last report in the NUREG/CR-2850 series.

  13. GROMACS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue

  14. Scott Burrow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scott Burrow csburrow@lbl.gov Phone: (510) 486-4313 Fax: (510) 486-4316 Computational Systems Group 1 Cyclotron Road Mail Stop 943-256 Berkeley, CA 94720 Scott Burrow is a system administrator in the Computational Systems Group. Scott is currently the system lead for Carver's testbed and a backup for Carver. Prior to that Scott worked in commercial and high performance computing. Scott served as an IBM consultant on-site for NASA Ames from 2007-2009 and at

  15. Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2013-01-21

    Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization models. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular, we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  16. Tracking the Performance Evolution of Blue Gene Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Gallo, Diego S.; Chen, Dong; Brunheroto, Jose R.; Ryu, Kyung D.; Chiu, George L.; Hoisie, Adolfy

    2013-06-17

    IBM's Blue Gene supercomputer has evolved through three generations, from the original Blue Gene/L to P to Q. A higher level of integration has enabled greater single-core performance and larger concurrency per compute node. Although these changes have brought a higher overall system peak performance, no study has examined in detail the evolution of performance across system generations. In this work we make two significant contributions: a comparative performance analysis across Blue Gene generations using a consistent set of tests, and a validated performance model of the NEK-Bone proxy application. The combination of empirical analysis and the predictive performance model enables us not only to directly compare measured performance but also to compare system configurations that cannot currently be measured. We provide insights into how the changing characteristics of Blue Gene have affected application performance, as well as what future systems may be able to achieve.

  17. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    SciTech Connect (OSTI)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-07-15

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of 192,194,196Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian, and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the γ-vibration bands are compared to the corresponding sequences of experimental states.

  18. Automation and optimization of the design parameters in tactical military pipeline systems. Master's thesis

    SciTech Connect (OSTI)

    Frick, R.M.

    1988-12-01

    Tactical military petroleum pipeline systems will play a vital role in any future conflict due to an increased consumption of petroleum products by our combined Armed Forces. The tactical pipeline must be rapidly constructed and highly mobile to keep pace with the constantly changing battle zone. Currently, the design of these pipeline systems is time consuming and inefficient, which may cause shortages of fuel and pipeline components at the front lines. Therefore, the need for a computer program that will both automate and optimize the pipeline design process is quite apparent. These design needs are satisfied by developing a software package written in the Advanced BASIC (IBM DOS) programming language that runs on an IBM-compatible personal computer. The program affords the user the options of either finding the optimum pump station locations for a proposed pipeline or calculating the maximum operating pressures for an existing pipeline. By automating the design procedure, a field engineer can vary the pipeline length, diameter, roughness, viscosity, gravity, flow rate, pump station pressure, or terrain profile and see how it affects the other parameters in just a few seconds. The design process was optimized by implementing a weighting scheme based on the volume percent of each fuel in the pipeline at any given time.
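
    The hydraulic core of such a design tool is a pressure-drop calculation along each pipeline segment. A minimal sketch of that step, assuming the standard Darcy-Weisbach equation with the explicit Swamee-Jain friction-factor approximation (the abstract does not name the correlations the thesis actually used, and the fluid properties below are illustrative):

        import math

        def pressure_drop_pa(length_m, diam_m, roughness_m, flow_m3s,
                             density=850.0, viscosity=2.0e-3):
            """Darcy-Weisbach pressure drop for one pipeline segment.

            The density/viscosity defaults are illustrative fuel-oil values;
            a design tool would take them as user inputs.
            """
            area = math.pi * diam_m ** 2 / 4.0
            v = flow_m3s / area                      # mean velocity, m/s
            re = density * v * diam_m / viscosity    # Reynolds number
            # Swamee-Jain explicit approximation to the Colebrook equation
            f = 0.25 / math.log10(roughness_m / (3.7 * diam_m)
                                  + 5.74 / re ** 0.9) ** 2
            return f * (length_m / diam_m) * density * v ** 2 / 2.0

        # Example: 10 km of 150 mm pipe at 30 L/s
        print(pressure_drop_pa(10_000, 0.15, 4.5e-5, 0.030))

    Iterating this calculation against pump head limits is what lets a program place pump stations or bound operating pressures for a given terrain profile.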

  19. A valiant little terminal: A VLT user's manual. Revision 4

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

    VLT came to be used at SLAC (Stanford Linear Accelerator Center) because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and TeX workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public domain VT100 emulator written by Dave Wecker et al., and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  20. Integrated Air Pollution Control System (IAPCS), Executable Model and Source Model (version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.

  1. Integrated Air Pollution Control System (IAPCS), Executable Model (Version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.

  2. A brief summary on formalizing parallel tensor distributions, redistributions, and algorithm derivations.

    SciTech Connect (OSTI)

    Schatz, Martin D.; Kolda, Tamara G.; van de Geijn, Robert

    2015-09-01

    Large-scale datasets in computational chemistry typically require distributed-memory parallel methods to perform a special operation known as tensor contraction. Tensors are multidimensional arrays, and a tensor contraction is akin to matrix multiplication with special types of permutations. Creating an efficient algorithm and optimized implementation in this domain is complex, tedious, and error-prone. To address this, we develop a notation to express data distributions so that we can use automated methods to find optimized implementations for tensor contractions. We consider the spin-adapted coupled cluster singles and doubles method from computational chemistry and use our methodology to produce an efficient implementation. Experiments performed on the IBM Blue Gene/Q and Cray XC30 demonstrate both improved performance and reduced memory consumption.
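
    As a concrete illustration of the operation being optimized (the contraction itself, not the paper's distributed algorithm), a tensor contraction sums over shared indices and can be recast as a matrix multiply after reshaping, which is what optimized implementations exploit; a NumPy sketch:

        import numpy as np

        # C[a,b] = sum_{i,j} A[a,i,j] * B[i,j,b] -- a contraction over
        # the shared indices i and j, akin to a permuted matrix multiply.
        A = np.random.rand(4, 5, 6)
        B = np.random.rand(5, 6, 7)
        C = np.einsum('aij,ijb->ab', A, B)

        # Equivalent formulation as an actual matrix multiply after
        # reshaping the shared indices into one axis:
        C2 = A.reshape(4, 30) @ B.reshape(30, 7)
        assert np.allclose(C, C2)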

  3. P

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Petascale Computing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Donald Frederick, Livermore Computing, Lawrence Livermore National Laboratory. LLNL-PRES-508651. IBM Blue Gene Architecture. Outline: Overview of Blue Gene; BG Philosophy; The BG Family; BG hardware (System Overview, CPU, Node, Interconnect); BG System Software; BG S

  4. Challenges of Algebraic Multigrid across Multicore Architectures

    SciTech Connect (OSTI)

    Baker, A H; Gamblin, T; Schulz, M; Yang, U M

    2010-04-12

    Algebraic multigrid (AMG) is a popular solver for large-scale scientific computing and an essential component of many simulation codes. AMG has been shown to be extremely efficient on distributed-memory architectures. However, when executed on modern multicore architectures, we face new challenges that can significantly deteriorate AMG's performance. We examine its performance and scalability on three disparate multicore architectures: a cluster with four AMD Opteron quad-core processors per node (Hera), a Cray XT5 with two AMD Opteron hex-core processors per node (Jaguar), and an IBM BlueGene/P system with a single quad-core processor (Intrepid). We discuss our experiences on these platforms and present results using both an MPI-only and a hybrid MPI/OpenMP model. We also discuss a set of techniques that helped to overcome the associated problems, including thread and process pinning and correct memory associations.

  5. 17th Edition of TOP500 List of World's Fastest Supercomputers Released

    SciTech Connect (OSTI)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.; Simon,Horst D.

    2001-06-21

    17th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 17th edition of the TOP500 list of the world's fastest supercomputers was released today (June 21). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 40 percent in terms of installed systems and 43 percent in terms of total performance of all the installed systems. In second place in terms of installed systems is Sun Microsystems with 16 percent, while Cray Inc. retained second place in terms of performance (13 percent). SGI Inc. was third with respect to both systems, with 63 (12.6 percent), and performance (10.2 percent).

  6. Application of bar codes to the automation of analytical sample data collection

    SciTech Connect (OSTI)

    Jurgensen, H A

    1986-01-01

    The Health Protection Department at the Savannah River Plant collects 500 urine samples per day for tritium analyses. Prior to automation, all sample information was compiled manually. Bar code technology was chosen for automating this program because it provides a more accurate, efficient, and inexpensive method for data entry. The system has three major functions: sample labeling, accomplished at remote bar code label stations composed of an Intermec 8220 (Intermec Corp.) interfaced to an IBM-PC; data collection, done on a central VAX 11/730 (Digital Equipment Corp.), where bar code readers are used to log in samples to be analyzed on liquid scintillation counters and the VAX 11/730 processes the data and generates reports; and data storage, on the VAX 11/730, backed up on the plant's central computer. A brief description of several other bar code applications at the Savannah River Plant is also presented.

  7. Automated system for handling tritiated mixed waste

    SciTech Connect (OSTI)

    Dennison, D.K.; Merrill, R.D.; Reitz, T.C.

    1995-03-01

    Lawrence Livermore National Laboratory (LLNL) is developing a semiautomated system for handling, characterizing, processing, sorting, and repackaging hazardous wastes containing tritium. The system combines an IBM-developed gantry robot with a special glove box enclosure designed to protect operators and minimize the potential release of tritium to the atmosphere. All hazardous waste handling and processing will be performed remotely, using the robot in a teleoperational mode for one-of-a-kind functions and in an autonomous mode for repetitive operations. Initially, this system will be used in conjunction with a portable gas system designed to capture any gaseous-phase tritium released into the glove box. This paper presents the objectives of this development program, provides background related to LLNL's robotics and waste handling program, describes the major system components, outlines system operation, and discusses current status and plans.

  8. Testing of the Eberline PCM-2

    SciTech Connect (OSTI)

    Howe, K.L.

    1994-12-23

    The PCM-2, manufactured by Eberline Instruments, is a whole-body monitor that detects both alpha and beta contamination. The PCM-2 uses an IBM-compatible personal computer for all software functions. The PCM-2 has 34 large-area detectors, which can cover approximately 40% of the body at a time; two counting cycles are therefore required to cover approximately 80% of the body. With the normal background seen at Rocky Flats, each count takes approximately 15-20 seconds. There are a number of beta and gamma whole-body monitors available from different manufacturers, but an alpha whole-body monitor is a rarity. Because of the need for alpha whole-body monitors at the Rocky Flats Environmental Technology Site, it was decided to test the PCM-2 thoroughly. A three-month test was run in a uranium building and a three-month test in a plutonium building to verify the alpha capabilities of the PCM-2.

  9. Trailblazing with Roadrunner

    SciTech Connect (OSTI)

    Henning, Paul J; White, Andrew B

    2009-01-01

    In June 2008, a new supercomputer broke the petaflop/s performance barrier, more than doubling the computational performance of the next fastest machine on the Top500 Supercomputing Sites list (http://top500.org). This computer, named Roadrunner, is the result of an intensive collaboration between IBM and Los Alamos National Laboratory, where it is now located. Aside from its performance, Roadrunner has two distinguishing characteristics: a very good power/performance ratio and a 'hybrid' computer architecture that mixes several types of processors. By November 2008, the traditionally architected Jaguar computer at Oak Ridge National Laboratory was neck-and-neck with Roadrunner in the performance race, but it requires almost 2.8 times the electric power of Roadrunner. This difference translates into millions of dollars per year in operating costs.

  10. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    SciTech Connect (OSTI)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining high performance in the evaluation of two-electron repulsion integrals and the construction of the Fock matrix is of considerable importance to the computational chemistry community. Because of the numerical complexity of these methods, improving their performance across a variety of leading supercomputing platforms is an increasing challenge, given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, and the Intel Xeon Phi. Our optimization schemes leverage key architectural features, including vectorization and simultaneous multithreading, and result in speedups of up to 2.5x compared with the original implementation.

  11. Simple Electric Vehicle Simulation

    Energy Science and Technology Software Center (OSTI)

    1993-07-29

    SIMPLEV2.0 is an electric vehicle simulation code which can be used with any IBM-compatible personal computer. This general-purpose simulation program is useful for performing parametric studies of electric and series-hybrid electric vehicle performance on user-input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC.
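
    The physics behind such a parametric study is a road-load power balance integrated over the driving cycle. A minimal sketch of that idea (with illustrative constants, not SIMPLEV's actual component models, and regenerative braking ignored):

        import numpy as np

        def cycle_energy_kwh(speed_ms, dt=1.0, mass=1500.0, cd=0.3,
                             area=2.0, crr=0.01, eff=0.85, rho=1.2, g=9.81):
            """Battery energy for one driving cycle via a road-load balance."""
            v = np.asarray(speed_ms, dtype=float)
            a = np.gradient(v, dt)                     # acceleration, m/s^2
            force = (0.5 * rho * cd * area * v**2      # aerodynamic drag
                     + crr * mass * g                  # rolling resistance
                     + mass * a)                       # inertia
            power = np.maximum(force * v, 0.0) / eff   # no regen, drivetrain loss
            return power.sum() * dt / 3.6e6            # J -> kWh

        # Example: a crude ramp-and-hold "cycle" at 1 s resolution
        cycle = np.concatenate([np.linspace(0, 15, 30), np.full(300, 15.0)])
        print(cycle_energy_kwh(cycle))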

  12. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Programmer's guide

    SciTech Connect (OSTI)

    Call, O. J.; Jacobson, J. A.

    1988-09-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability and hardware component failure data. The NUCLARR system software resides on an IBM (or compatible) personal micro-computer and can be used to furnish data inputs for both human and hardware reliability analysis in support of a variety of risk assessment activities. The NUCLARR system is documented in a five-volume series of reports. Volume 2 of this series is the Programmer's Guide for maintaining the NUCLARR system software. This Programmer's Guide provides, for the software engineer, an orientation to the software elements involved, discusses maintenance methods, and presents useful aids and examples. 4 refs., 75 figs., 1 tab.

  13. The NUCLARR databank: Human reliability and hardware failure data for the nuclear power industry

    SciTech Connect (OSTI)

    Reece, W.J.

    1993-05-01

    Under the sponsorship of the US Nuclear Regulatory Commission (NRC), the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) was developed to provide human reliability and hardware failure data to analysts in the nuclear power industry. This IBM-compatible databank is contained on a set of floppy diskettes which include data files and a menu-driven system for locating, reviewing, sorting, and retrieving the data. NUCLARR contains over 2500 individual data records, drawn from more than 60 sources. The system is upgraded annually to include additional human error and hardware component failure data and programming enhancements (i.e., increased user-friendliness). NUCLARR is available from the NRC through project staff at the INEL.

  14. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    SciTech Connect (OSTI)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter,Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6,080-CPU IBM SP and the largest parallel computer at NERSC. The workload is analyzed in terms of the concurrency and problem size of the jobs run. Drawing on batch queue logs, performance data, and feedback from researchers, we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  15. LAMMPS strong scaling performance optimization on Blue Gene/Q

    SciTech Connect (OSTI)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.
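
    The communication change described above, re-engineering the parallel 3D FFT to use collectives instead of point-to-point messages, comes down to expressing the distributed transpose between FFT stages as an all-to-all. A minimal mpi4py sketch of that pattern (an illustration of the idea, not the LAMMPS/PPPM code):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        p = comm.Get_size()
        n = 4                                   # rows per rank

        # Rank k owns rows k*n..(k+1)*n-1 of an (n*p) x (n*p) global matrix.
        local = np.random.rand(n, n * p)

        # Pack: the column block destined for rank r is local[:, r*n:(r+1)*n].
        send = np.ascontiguousarray(local.reshape(n, p, n).swapaxes(0, 1))
        recv = np.empty_like(send)
        comm.Alltoall(send, recv)               # one collective, not p-1 sends

        # Local transpose of each received block completes this rank's slab
        # of the transposed matrix.
        slab_t = np.concatenate([blk.T for blk in recv], axis=1)

    A single Alltoall lets the MPI library schedule the whole exchange at once, which is the gain over issuing p-1 separate point-to-point messages per rank.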

  16. A mobile computed tomographic unit for inspecting reinforced concrete columns

    SciTech Connect (OSTI)

    Sumitra, T.; Srisatit, S.; Pattarasumunt, A.

    1994-12-31

    A mobile computed tomographic unit applicable to the inspection of reinforced concrete columns was designed, constructed, and tested. A CT image reconstruction programme written in Quick Basic was first developed to be used on an IBM PC/AT microcomputer. It provided user-friendly menus for processing data and displaying the CT image. The prototype of a gamma-ray scanning system using a 1.11 GBq Cs-137 source and a NaI(Tl) scintillation detector was also designed and constructed. The system was a microcomputer-controlled, single-beam rotate-translate scanner used for collecting transmitted gamma-ray data at different angles. The CT unit was finally tested with a standard column and a column of an existing building. The cross-sectional images of the columns could be clearly seen. The positions and sizes of the reinforcing bars could be estimated.
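
    A rotate-translate scan of this kind produces a sinogram of transmission measurements from which the cross section is reconstructed. A minimal sketch of that reconstruction step using scikit-image's filtered back-projection (a modern stand-in for the original Quick Basic program; the phantom below is made up):

        import numpy as np
        from skimage.transform import radon, iradon

        # Synthetic "column" phantom: a dense circular section with a few
        # high-density spots standing in for reinforcing bars.
        img = np.zeros((128, 128))
        yy, xx = np.mgrid[:128, :128]
        img[(yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2] = 1.0
        for cy, cx in [(40, 64), (88, 50), (88, 78)]:
            img[(yy - cy) ** 2 + (xx - cx) ** 2 < 4 ** 2] = 3.0

        theta = np.linspace(0.0, 180.0, 90, endpoint=False)
        sinogram = radon(img, theta=theta)      # simulate the scan
        recon = iradon(sinogram, theta=theta)   # filtered back-projection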

  17. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-01-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  18. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-05-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  19. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Agenda Thursday, February 22 8:00 - 8:30 Pastries and coffee available 8:30 - 8:45 Rob Ryne Introductions 8:45 - 9:30 Walt Polansky Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - 10:45 Break 10:45 - 11:15 Bill Kramer Lessons Learned from the last Greenbook 11:15 - 11:45 Doug Rotman Goals for the next Greenbook 11:45 - 12:15 Tour of the Oakland Facility 12:15 - 1:45 Lunch 1:45 - 2:30 Mike Minkoff

  20. Computer Algebra System

    Energy Science and Technology Software Center (OSTI)

    1992-05-04

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC versions under UNIX and the Data General version under AOS/VS.
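
    The operations listed here have direct analogues in modern open-source computer algebra systems. For orientation, a brief SymPy sketch of the same kinds of manipulations (an analogy, not DOE-MACSYMA itself):

        import sympy as sp

        x, y = sp.symbols('x y')
        f = sp.Function('f')

        d = sp.diff(sp.sin(x) * sp.exp(x), x)                # differentiate
        i = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))  # integrate -> sqrt(pi)
        l = sp.limit(sp.sin(x) / x, x, 0)                    # take a limit -> 1
        s = sp.solve([x + y - 3, x - y - 1], [x, y])         # linear system -> {x: 2, y: 1}
        t = sp.series(sp.cos(x), x, 0, 6)                    # Taylor series
        ode = sp.dsolve(f(x).diff(x) - f(x))                 # ODE f' = f -> C1*exp(x)
        m = sp.Matrix([[1, 2], [3, 4]]).inv()                # matrix manipulation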

  1. PC Basic Linear Algebra Subroutines

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow, and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly-language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
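
    The Level 1 operations enumerated above map one-to-one onto modern array libraries. For orientation, a NumPy sketch of the same primitives (equivalents for illustration, not the PC-BLAS interfaces themselves):

        import numpy as np

        x = np.array([3.0, -7.0, 2.0])
        y = np.array([1.0, 0.5, -2.0])

        idx = np.argmax(np.abs(x))     # index of largest component (IxAMAX)
        x2 = 2.5 * x                   # multiply a vector by a constant (xSCAL)
        norm = np.linalg.norm(x)       # Euclidean length (xNRM2)
        dot = np.dot(x, y)             # dot product (xDOT)
        x, y = y.copy(), x.copy()      # swap / copy vectors (xSWAP, xCOPY)
        c, s = 0.8, 0.6                # apply a Givens rotation (xROT)
        x, y = c * x + s * y, -s * x + c * y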

  2. EIA directory of electronic products. Second quarter 1995

    SciTech Connect (OSTI)

    1995-10-04

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. They are available to the public on magnetic tapes; selected data files/models are available on diskette for IBM-compatible personal computers. This directory first presents the on-line files and compact discs. This is followed by descriptions and technical contacts and ordering and other information on the data files and models. An index by energy source is included. Additional ordering information is in the preface. The data files cover petroleum, natural gas, electricity, coal, integrated statistics, and consumption; the models cover petroleum, natural gas, electricity, coal, nuclear, and multifuel.

  3. Modeling and simulation of Red Teaming. Part 1, Why Red Team M&S?

    SciTech Connect (OSTI)

    Skroch, Michael J.

    2009-11-01

    Red teams that address complex systems have rarely taken advantage of Modeling and Simulation (M&S) in a way that reproduces most or all of a red-blue team exchange within a computer. Chess programs, starting with IBM's Deep Blue, outperform humans in that red-blue interaction, so why shouldn't we think computers can outperform traditional red teams now or in the future? This and future position papers will explore possible ways to use M&S to augment or replace traditional red teams in some situations, the features Red Team M&S should possess, how one might connect live and simulated red teams, and existing tools in this domain.

  4. A BLAS-3 version of the QR factorization with column pivoting

    SciTech Connect (OSTI)

    Quintana-Orti, G.; Sun, X.; Bischof, C.H.

    1998-09-01

    The QR factorization with column pivoting (QRP), originally suggested by Golub, is a popular approach to computing rank-revealing factorizations. Using Level 1 BLAS, it was implemented in LINPACK, and, using Level 2 BLAS, in LAPACK. While the Level 2 BLAS version delivers superior performance in general, it may result in worse performance for large matrix sizes due to cache effects. The authors introduce a modification of the QRP algorithm which allows the use of Level 3 BLAS kernels while maintaining the numerical behavior of the LINPACK and LAPACK implementations. Experimental comparisons of this approach with the LINPACK and LAPACK implementations on IBM RS/6000, SGI R8000, and DEC AXP platforms show considerable performance improvements.
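
    For reference, the factorization under discussion is available through the LAPACK bindings in SciPy. A minimal sketch of a rank-revealing QR with column pivoting (using SciPy's interface, not the authors' BLAS-3 code):

        import numpy as np
        from scipy.linalg import qr

        # Build a numerically rank-2 matrix: a[i, j] = (i + 1) + j.
        a = (np.outer(np.arange(1.0, 7.0), np.ones(4))
             + np.outer(np.ones(6), np.arange(4.0)))

        q, r, piv = qr(a, pivoting=True)   # A[:, piv] = Q @ R
        # The decreasing magnitudes of diag(R) reveal the numerical rank.
        rank = int(np.sum(np.abs(np.diag(r)) > 1e-10 * np.abs(r[0, 0])))
        print(rank)   # 2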

  5. Polymer Hybrid Photovoltaics for Inexpensive Electricity Generation: Final Technical Report, 1 September 2001--30 April 2006

    SciTech Connect (OSTI)

    Carter, S. A.

    2006-07-01

    The project goal is to understand the operating mechanisms underlying the performance of polymer hybrid photovoltaics, to enable the development of a photovoltaic whose ratio of maximum power conversion efficiency to cost is significantly greater than that of current PV technologies. Plastic or polymer-based photovoltaics can have significant cost advantages over conventional technologies in that they are compatible with liquid-based plastic processing and can be assembled onto plastic under atmospheric conditions (ambient temperature and pressure) using standard printing technologies, such as reel-to-reel and screen printing. Moreover, polymer-based PVs are lightweight, flexible, and largely unbreakable, which makes shipping, installation, and maintenance simpler. Furthermore, a numerical simulation program was developed (in collaboration with IBM) to fully simulate the performance of multicomponent polymer photovoltaic devices, and a manufacturing method was developed (in collaboration with Add-Vision) to inexpensively manufacture larger-area devices.

  6. TOP500 Supercomputers for June 2002

    SciTech Connect (OSTI)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now-No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  7. Simulation of oil-slick transport in Great Lakes connecting channels. User's manual for the River Spill Simulation Model (ROSS). Special report

    SciTech Connect (OSTI)

    Shen, H.T.; Yapa, P.D.; Petroski, M.E.

    1991-12-01

    Two computer models, named ROSS and LROSS, have been developed for simulating oil slick transport in rivers and lakes, respectively. The oil slick transformation processes considered in these models include advection, spreading, evaporation, and dissolution. These models can be used for slicks of any shape originating from instantaneous or continuous spills in rivers and lakes with or without ice covers. Although developed for the connecting channels in the upper Great Lakes, including the Detroit River, Lake St. Clair, the St. Clair River, and the St. Marys River, these models are site independent and can be used for other rivers and lakes. The programs are written in the FORTRAN programming language to be compatible with the FORTRAN77 compiler. In addition, a user-friendly, menu-driven program with graphics capability was developed for the IBM-PC AT computer, so that these models can easily be used to assist cleanup actions in the connecting channels should an oil spill occur.
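
    A toy illustration of the transport processes named here, advection plus spreading with first-order losses standing in for evaporation and dissolution, on a 1D periodic grid (a pedagogical sketch, not the ROSS formulation):

        import numpy as np

        nx, dx, dt = 200, 10.0, 5.0       # cells, cell size (m), step (s)
        u, D, k = 0.5, 1.0, 1e-5          # current m/s, spreading m^2/s, loss 1/s

        c = np.zeros(nx)
        c[20] = 100.0                     # instantaneous spill

        for _ in range(1000):
            # np.roll gives periodic boundaries; upwind advection is stable
            # here because the Courant number u*dt/dx = 0.25 < 1.
            adv = -u * (c - np.roll(c, 1)) / dx
            dif = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
            c = c + dt * (adv + dif - k * c)   # decay = evaporation + dissolution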

  8. PCDAS Version 2. 2: Remote network control and data acquisition

    SciTech Connect (OSTI)

    Fishbaugher, M.J.

    1987-09-01

    This manual is intended for both technical and non-technical people who want to use the PCDAS remote network control and data acquisition software. If you are unfamiliar with remote data collection hardware systems designed at Pacific Northwest Laboratory (PNL), this introduction should answer your basic questions. Even if you have some experience with the PNL-designed Field Data Acquisition Systems (FDAS), it would be wise to review this material before attempting to set up a network. This manual was written based on the assumption that you have a rudimentary understanding of personal computer (PC) operations using Disk Operating System (DOS) version 2.0 or greater (IBM 1984). You should know how to create subdirectories and get around the subdirectory tree.

  9. Waterflooding in a system of horizontal wells

    SciTech Connect (OSTI)

    Bedrikovetsky, P.G.; Magarshak, T.O.; Shapiro, A.A.

    1995-10-01

    An approximate analytical method for the simulation of waterflooding in a system of horizontal wells is developed. The method is based on an advanced stream-line concept. The essence of this new method is the exact solution of the 3D two-phase flow problem in the system of coordinates linked with the stream lines, under the only assumption of the immobility of stream lines. Software based on this approach was developed for IBM-compatible PCs. It allows multivariant comparative studies of immiscible displacement in systems of horizontal, vertical, and slant wells. The simulator has been used to optimize the geometrical parameters of a regular well system and to predict recovery under the conditions of the Prirazlomnoye offshore oil field.
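
    A stream-line method reduces the 3D displacement to 1D two-phase solutions along each stream line, and the classical 1D building block is the Buckley-Leverett fractional-flow function. A sketch with simple Corey relative permeabilities (illustrative assumptions, not the paper's formulation):

        import numpy as np

        def fractional_flow(sw, mu_w=1.0e-3, mu_o=5.0e-3):
            """Water fractional flow f_w(S_w), Corey exponents of 2."""
            krw = sw ** 2                 # water relative permeability
            kro = (1.0 - sw) ** 2         # oil relative permeability
            return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

        sw = np.linspace(0.05, 0.95, 10)  # avoid krw = 0 at sw = 0
        print(fractional_flow(sw))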

  10. User's guide to a data base of current environmental monitoring projects in the US-Canadian transboundary region

    SciTech Connect (OSTI)

    Ballinger, M.Y.; Defferding, J.; Chapman, E.G.; Bettinson, M.D.; Glantz, C.S.

    1987-11-01

    This document describes how to use a data base of current transboundary region environmental monitoring projects. The data base was prepared from data provided by Glantz et al. (1986) and Concord Scientific Corporation (1985), and contains information on 226 projects with monitoring stations located within 400 km (250 mi) of the US-Canadian border. The data base is designed for use with the dBASE III PLUS data management system on IBM-compatible personal computers. Data-base searches are best accomplished using an accompanying command file called RETRIEVE or the dBASE command LIST. The user must carefully select the substrings on which the search is to be based. Example search requests and subsequent output are presented to illustrate substring selections and applications of the data base. 4 refs., 15 figs., 4 tabs.

  11. SIMPLEV: A simple electric vehicle simulation program, Version 1.0

    SciTech Connect (OSTI)

    Cole, G.H.

    1991-06-01

    An electric vehicle simulation code that can be used with any IBM-compatible personal computer was written. This general-purpose simulation program is useful for performing parametric studies of electric vehicle performance on user-input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC. This report serves as a users' manual and documents the mathematical relationships used in the simulation.

  12. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect (OSTI)

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint - allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the Gyrokinetic Toroidal Code (GTC) micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.

  13. Health Physics Positions Data Base: Revision 1

    SciTech Connect (OSTI)

    Kerr, G.D.; Borges, T.; Stafford, R.S.; Lu, P.Y.; Carter, D.

    1994-02-01

    The Health Physics Positions (HPPOS) Data Base of the Nuclear Regulatory Commission (NRC) is a collection of NRC staff positions on a wide range of topics involving radiation protection (health physics). It consists of 328 documents in the form of letters, memoranda, and excerpts from technical reports. The HPPOS Data Base was developed by the NRC Headquarters and Regional Offices to help ensure uniformity in inspections, enforcement, and licensing actions. Staff members of the Oak Ridge National Laboratory (ORNL) have assisted the NRC staff in summarizing the documents during the preparation of this NUREG report. These summaries are also being made available as a "stand-alone" software package for IBM and IBM-compatible personal computers. The software package for this report is called HPPOS Version 2.0. A variety of indexing schemes were used to increase the usefulness of the NUREG report and its associated software. The software package and the summaries in the report are written in the context of the "new" 10 CFR Part 20 (§§ 20.1001-20.2401). The purpose of this NUREG report is to allow interested individuals to familiarize themselves with the contents of the HPPOS Data Base and with the basis of many NRC decisions and regulations. The HPPOS summaries and original documents are intended to serve as a source of information for radiation protection programs at nuclear research and power reactors, nuclear medicine, and other industries that either process or use nuclear materials.

  14. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    SciTech Connect (OSTI)

    Musick, R.; Critchlow, T.; Ganesh, M.; Fidelis, Z.; Zemla, A.; Slezak, T.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solutions using relational databases, such as those sold by Oracle, IBM, Informix, and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format, and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
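
    The wrapper-and-mediator layering described above is easy to sketch: each wrapper translates one source's native format into a shared schema, and the mediator presents the union as a single virtual store, so a source-format change touches only that source's wrapper. A minimal illustration (hypothetical schema and sources, not the Data Foundry code):

        from dataclasses import dataclass
        from typing import Iterable, List

        @dataclass
        class Record:                 # the warehouse's single, coherent schema
            source: str
            gene_id: str
            sequence: str

        class CsvWrapper:
            """Translates one source's native rows into shared Records."""
            def __init__(self, name: str, rows: Iterable[dict]):
                self.name, self.rows = name, list(rows)
            def fetch(self) -> List[Record]:
                return [Record(self.name, r["id"], r["seq"]) for r in self.rows]

        class Mediator:
            """Presents many wrapped sources as one virtual data store."""
            def __init__(self, wrappers: List[CsvWrapper]):
                self.wrappers = wrappers
            def query(self, gene_id: str) -> List[Record]:
                return [rec for w in self.wrappers for rec in w.fetch()
                        if rec.gene_id == gene_id]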

  15. Project Report on DOE Young Investigator Grant (Contract No. DE-FG02-02ER25525) Dynamic Scheduling and Fusion of Irregular Computation (August 15, 2002 to August 14, 2005)

    SciTech Connect (OSTI)

    Chen Ding

    2005-08-16

    Computer simulation has become increasingly important in many scientific disciplines, but its performance and scalability are severely limited by the memory throughput on today's computer systems. With the support of this grant, we first designed training-based prediction, which accurately predicts the memory performance of large applications before their execution. Then we developed optimization techniques using dynamic computation fusion and large-scale data transformation. The research work has three major components. The first is modeling and prediction of cache behavior. We have developed a new technique which uses reuse distance information from training inputs and then extracts a parameterized model of the program's cache miss rates for any input size and for any size of fully associative cache. Using the model we have built a web-based tool using three-dimensional visualization. The new model can help to build cost-effective computer systems, design better benchmark suites, and improve task scheduling on heterogeneous systems. The second component is global computation for improving cache performance. We have developed an algorithm for dynamic data partitioning using sampling theory and probability distribution. Recent work from a number of groups shows that manual or semi-manual computation fusion has significant benefits in physical, mechanical, and biological simulations as well as information retrieval and machine verification. We have developed an automatic tool that measures the potential of computation fusion. The new system can be used by high-performance application programmers to estimate the potential of locality improvement for a program before trying complex transformations for a specific cache system. The last component studies models of spatial locality and the problem of data layout. In scientific programs, most data are stored in arrays. Grand challenge problems such as hydrodynamics simulation and data mining may use an enormous number of data elements. To optimize the layout across multiple arrays, we have developed a formal model called reference affinity. We collaborated with the IBM production compiler group and designed an efficient compiler analysis that performs as well as data or code profiling does. Based on these results, the IBM group has filed a patent and is including this technique in their product compiler. A major part of the project is the development of software tools. We have developed web-based visualization for program locality. In addition, we have implemented a prototype of array regrouping in the IBM compiler. The full implementation is expected to come out of IBM in the near future and to benefit scientific applications running on IBM supercomputers. We have also developed a test environment for studying the limit of computation fusion. Finally, our work has directly influenced the design of the Intel Itanium compiler. The project has strengthened the research relationship between the PI's group and groups in DoE labs. The PI was an invited speaker at the Center for Applied Scientific Computing Seminar Series at the early stage of the project. The question most of the audience was curious about was the limit of computation fusion, which has been studied in depth in this research. In addition, the seminar directly helped a group at Lawrence Livermore to achieve a four-times speedup on an important DoE code.
    The PI helped to organize a number of high-performance computing forums, including the founding of a workshop on memory system performance (MSP). In the past two years, one fourth of the papers in the workshop came from researchers at the Lawrence Livermore, Argonne, Los Alamos, and Lawrence Berkeley national laboratories. The PI lectured frequently on DoE-funded research. In a broader context, high performance computing is central to America's scientific and economic stature in the world,
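
    The reuse distance underlying the training-based predictor counts the number of distinct data elements touched between two accesses to the same element. A compact, deliberately unoptimized sketch (production tools use tree-based structures rather than this O(N·M) scan):

        def reuse_distances(trace):
            """Reuse distance per access: number of distinct elements
            referenced since the previous access to the same element
            (None for first-time, or "cold", accesses)."""
            stack, dists = [], []
            for addr in trace:
                if addr in stack:
                    # Elements above addr on the stack are exactly the
                    # distinct elements touched since its last access.
                    dists.append(len(stack) - 1 - stack.index(addr))
                    stack.remove(addr)
                else:
                    dists.append(None)
                stack.append(addr)      # addr is now most recently used
            return dists

        # 'a' is re-touched after b and c: distance 2
        print(reuse_distances(['a', 'b', 'c', 'a']))  # [None, None, None, 2]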

  16. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    SciTech Connect (OSTI)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang; Yang, Hui

    2014-09-30

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles by a fraction of the Eulerian grid spacing helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme in the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The preexisting first-order code was updated so that it resolves the translational and rotational motion of particles with second-order convergence. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems. Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulated results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers. Drag force is basically unchanged as the angle of the rotating axis varies.

  17. HARE: Final Report

    SciTech Connect (OSTI)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a 6-year period under the FAST-OS programs. The first effort, called Right-Weight Kernels (RWK), was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems and determining how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact: they helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well; created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development; created a standard system measurement tool, Fixed Time Quantum (FTQ), which is widely used for measuring operating systems' impact on applications; spurred the use of the 9p protocol in several organizations, including IBM; built software in use at many companies, including IBM, Cray, and Google; spurred the creation of alternative runtimes for use on HPC systems; and demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries. Open source was a key part of this work. The code developed for this project is in wide use and available in many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. The rest of this report is organized as follows: first, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we discuss infrastructure software; and we close with a description of the new NIX operating system, future work, and conclusions.

  18. Simulating atmosphere flow for wind energy applications with WRF-LES

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-01-14

    Forecasts of available wind energy resources at high spatial resolution enable users to site wind turbines in optimal locations, to forecast available resources for integration into power grids, to schedule maintenance on wind energy facilities, and to define design criteria for next-generation turbines. This array of research needs implies that an appropriate forecasting tool must be able to account for mesoscale processes like frontal passages, surface-atmosphere interactions inducing local-scale circulations, and the microscale effects of atmospheric stability such as breaking Kelvin-Helmholtz billows. This range of scales and processes demands a mesoscale model with large-eddy simulation (LES) capabilities which can also account for varying atmospheric stability. Numerical weather prediction models, such as the Weather Research and Forecasting model (WRF), excel at predicting synoptic and mesoscale phenomena. With grid spacings of less than 1 km (as is often required for wind energy applications), however, the limits of WRF's subfilter-scale (SFS) turbulence parameterizations are exposed, and fundamental problems arise, associated with modeling the scales of motion between those which LES can represent and those for which large-scale PBL parameterizations apply. To address these issues, we have implemented significant modifications to the ARW core of the Weather Research and Forecasting model, including the Nonlinear Backscatter model with Anisotropy (NBA) SFS model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We are also modifying WRF's terrain-following coordinate system by implementing an immersed boundary method (IBM) approach to account for the effects of complex terrain. Companion papers presenting idealized simulations with NBA-RSFS-WRF (Mirocha et al.) and IBM-WRF (K. A. Lundquist et al.) are also presented. Observations of flow through the Altamont Pass (Northern California) wind farm are available for validation of the WRF modeling tool for wind energy applications. In this presentation, we use these data to evaluate simulations using the NBA-RSFS-WRF tool in multiple configurations. We vary nesting capabilities, levels of RSFS reconstruction, and SFS turbulence models (the new NBA turbulence model versus existing WRF SFS turbulence models) to illustrate the capabilities of the modeling tool and to prioritize recommendations for operational use. Nested simulations which capture both significant mesoscale processes and local-scale stable boundary layer effects are required to effectively predict available wind resources at turbine height.

  19. 2008 ALCF annual report.

    SciTech Connect (OSTI)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. No question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  20. PDS SHRINK

    SciTech Connect (OSTI)

    Phillion, D.

    1991-12-15

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16 bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed to any magnification and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change of lambda width. It is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.
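
    As a minimal sketch of the position-to-wavelength calibration step mentioned above (PDS SHRINK itself is not written in Python, and the line positions and wavelengths here are invented for illustration):

    ```python
    # Minimal sketch of the position-to-wavelength calibration described
    # above: fit a polynomial through identified spectral lines and
    # evaluate it at every pixel. Line data below are made up.
    import numpy as np

    pixel_pos = np.array([102.4, 388.1, 611.7, 902.3])      # measured line centers
    wavelength = np.array([253.65, 365.02, 435.83, 546.07])  # known lines (nm)

    coeffs = np.polyfit(pixel_pos, wavelength, deg=2)  # quadratic dispersion fit
    pixels = np.arange(1024)
    lam = np.polyval(coeffs, pixels)                   # wavelength at every pixel
    print("dispersion at center: %.4f nm/pixel" % np.gradient(lam)[512])
    ```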

  1. xdamp Version 6 : an IDL-based data and image manipulation program.

    SciTech Connect (OSTI)

    Ballard, William Parker

    2012-04-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett-Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  2. Code manual for MACCS2: Volume 1, user's guide

    SciTech Connect (OSTI)

    Chanin, D.I.; Young, M.L.

    1997-03-01

    This report describes the use of the MACCS2 code. The document is primarily a user's guide, though some model description information is included. MACCS2 represents a major enhancement of its predecessor MACCS, the MELCOR Accident Consequence Code System. MACCS, distributed by government code centers since 1990, was developed to evaluate the impacts of severe accidents at nuclear power plants on the surrounding public. The principal phenomena considered are atmospheric transport and deposition under time-variant meteorology, short- and long-term mitigative actions and exposure pathways, deterministic and stochastic health effects, and economic costs. No other U.S. code that is publicly available at present offers all these capabilities. MACCS2 was developed as a general-purpose tool applicable to diverse reactor and nonreactor facilities licensed by the Nuclear Regulatory Commission or operated by the Department of Energy or the Department of Defense. The MACCS2 package includes three primary enhancements: (1) a more flexible emergency-response model, (2) an expanded library of radionuclides, and (3) a semidynamic food-chain model. Other improvements are in the areas of phenomenological modeling and new output options. Initial installation of the code, written in FORTRAN 77, requires a 486 or higher IBM-compatible PC with 8 MB of RAM.

  3. What then do we do about computer security?

    SciTech Connect (OSTI)

    Suppona, Roger A.; Mayo, Jackson R.; Davis, Christopher Edward; Berg, Michael J.; Wyss, Gregory Dane

    2012-01-01

    This report presents the answers that an informal and unfunded group at SNL provided for questions concerning computer security posed by Jim Gosler, Sandia Fellow (00002). The primary purpose of this report is to record our current answers; hopefully those answers will turn out to be answers indeed. The group was formed in November 2010, when Jim Gosler, Sandia Fellow, asked several of us pointed questions about computer security metrics. Never mind that some of the best minds in the field have been trying to crack this nut without success for decades. Jim asked Campbell to lead an informal and unfunded group to answer the questions. With time Jim invited several more Sandians to join in. We met a number of times both with Jim and without him. At Jim's direction we contacted a number of people outside Sandia who Jim thought could help. For example, we interacted with IBM's T.J. Watson Research Center and held a one-day videoconference workshop with them on the questions.

  4. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops · day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).
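
    The spatial-locality principle behind the EDC framework builds on the classic linked-cell decomposition that underlies O(N) neighbor searches in molecular dynamics. A minimal sketch, with the box size and cutoff chosen arbitrarily for illustration:

    ```python
    # Schematic of the spatial-locality idea behind O(N) divide-and-conquer
    # MD: bin atoms into cells no smaller than the interaction cutoff, so a
    # neighbor search only needs to visit a cell and its adjacent cells.
    import numpy as np
    from collections import defaultdict

    def build_cells(positions, box, cutoff):
        ncell = max(1, int(box // cutoff))   # cells per side
        size = box / ncell
        cells = defaultdict(list)
        for i, p in enumerate(positions):
            idx = tuple((p // size).astype(int) % ncell)
            cells[idx].append(i)
        return cells, ncell

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(1000, 3))
    cells, ncell = build_cells(pos, box=10.0, cutoff=2.5)
    print(ncell, "cells per side;", len(cells), "occupied cells")
    ```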

  5. A spreadsheet-coupled SOLGAS: A computerized thermodynamic equilibrium calculation tool. Revision 1

    SciTech Connect (OSTI)

    Trowbridge, L.D.; Leitnaker, J.M.

    1995-07-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several "bells and whistles" have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised spreadsheet-based format for entering data, including non-ideal binary and ternary mixtures, simplifies and reduces chances for error. Calculational errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed on line. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384 KB of low RAM, are available from the authors. This user manual contains appendices with examples of the use of SOLGAS. These range from elementary examples, such as the relationships among water, ice, and water vapor, to more complex systems: phase diagram calculation of the UF₄-UF₆ system; burning UF₄ in fluorine; thermodynamic calculation of the Cl-F-O-H system; equilibria calculations in the CCl₄-CH₃OH system; and limitations applicable to aqueous solutions. An appendix also contains the source code.
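
    At its core, a SOLGAS-style calculation minimizes the total Gibbs energy of the system subject to element balance. A minimal sketch for an ideal-gas mixture follows; the species data are invented, and SOLGAS itself also treats non-ideal binary and ternary mixtures:

    ```python
    # Sketch of a SOLGAS-style equilibrium calculation: minimize the total
    # Gibbs energy of an ideal-gas mixture subject to element conservation.
    # Standard chemical potentials are illustrative, not real data.
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 1500.0
    mu0 = np.array([0.0, 0.0, 90000.0])    # J/mol for N2, O2, NO (illustrative)
    A = np.array([[2, 0, 1],               # N atoms per molecule
                  [0, 2, 1]])              # O atoms per molecule
    b = A @ np.array([1.0, 1.0, 0.0])      # start from 1 mol N2 + 1 mol O2

    def gibbs(n):
        ntot = n.sum()
        return np.sum(n * (mu0 + R * T * np.log(np.maximum(n, 1e-12) / ntot)))

    res = minimize(gibbs, x0=np.array([0.9, 0.9, 0.2]),
                   bounds=[(1e-10, None)] * 3,
                   constraints={"type": "eq", "fun": lambda n: A @ n - b})
    print("equilibrium moles (N2, O2, NO):", res.x)
    ```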

  6. Prototype prosperity-diversity game for the Laboratory Development Division of Sandia National Laboratories

    SciTech Connect (OSTI)

    VanDevender, P.; Berman, M.; Savage, K.

    1996-02-01

    The Prosperity Game conducted for the Laboratory Development Division of Sandia National Laboratories on May 24-25, 1995, focused on the individual and organizational autonomy plaguing the Department of Energy (DOE)-Congress-Laboratories' ability to manage the wrenching change of declining budgets. Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Each Prosperity Game is unique in that both the game format and the player contributions vary from game to game. This particular Prosperity Game was played by volunteers from Sandia National Laboratories, Eastman Kodak, IBM, and AT&T. Since the participants fully control the content of the games, the specific outcomes will be different when the team for each laboratory, Congress, DOE, and the Laboratory Operating Board (now Laboratory Operations Board) is composed of executives from those respective organizations. Nevertheless, the strategies and implementing agreements suggest that the Prosperity Games stimulate cooperative behaviors and may permit the executives of the institutions to safely explore the consequences of such strategies.

  7. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F.

    1993-05-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  8. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F. (Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  9. Status of the MORSE multigroup Monte Carlo radiation transport code

    SciTech Connect (OSTI)

    Emmett, M.B.

    1993-06-01

    There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the best known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

  10. Application of expert systems for diagnosing equipment failures at central energy plants. Final report

    SciTech Connect (OSTI)

    Moshage, R.; Kantamneni, M.; Schanche, G.; Metea, M.; Blazek, C.

    1993-12-01

    The growing cost of operating and maintaining its central heating plants (CHPs) has forced the Army to seek alternatives to traditional methods of running these facilities. Computer technology offers the potential to automate and assist in many tasks, such as in the diagnosis of equipment malfunctions and failures in Army CHPs. An automated diagnostic tool for heating plant equipment could lower the cost of human labor by freeing personnel for higher priority work. Automatic diagnosis of problems could also reduce downtime for repair, promote thermal efficiency, and improve on-line reliability. Researchers at the U.S. Army Construction Engineering Research Laboratories (USACERL) investigated the application of artificial intelligence (AI) using knowledge-based expert systems to the monitoring and diagnosing of CHP boiler operations. A prototype system (MAD) was developed to Monitor And Diagnose boiler failure or identify inefficient operation, and recommend action to optimize combustion efficiency. The system includes a knowledge base containing rules for diagnosing the condition of major package boiler components. Minimum system requirements for MAD are an IBM-compatible AT-class personal computer (PC) with 640K base memory and 1 megabyte extended memory, 1.5 megabytes of free hard drive space, a color graphics adaptor (CGA), and DOS 3.0 (or higher).
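
    A knowledge base of the kind described can be sketched as condition-diagnosis-action rules. The rules, sensor names, and thresholds below are invented for illustration and are not taken from MAD:

    ```python
    # Toy sketch of a rule-based boiler diagnostic: each rule maps observed
    # symptoms to a diagnosis and a recommended action. All rules and
    # thresholds here are hypothetical.
    RULES = [
        (lambda s: s["stack_O2_pct"] > 8.0,
         "Excess combustion air", "Reduce forced-draft fan output"),
        (lambda s: s["stack_temp_F"] > 600.0,
         "Fouled heat-transfer surfaces", "Schedule soot blowing"),
        (lambda s: s["drum_level_in"] < -2.0,
         "Low drum level", "Check feedwater regulator"),
    ]

    def diagnose(sensors):
        findings = [(d, a) for test, d, a in RULES if test(sensors)]
        return findings or [("Normal operation", "No action required")]

    readings = {"stack_O2_pct": 9.5, "stack_temp_F": 640.0, "drum_level_in": 0.1}
    for diagnosis, action in diagnose(readings):
        print(f"{diagnosis}: {action}")
    ```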

  11. On-line test of signal validation software on the LOBI-MOD2 facility in Ispra, Italy

    SciTech Connect (OSTI)

    Prock, J.; Labeit, M.; Ohlmer, E. (Joint Research Centre)

    1992-01-01

    A computer program for detecting abrupt changes in non-hardware-redundant measurement signals, using different methods of analytical redundancy, was developed by the Gesellschaft für Reaktorsicherheit, Garching, Federal Republic of Germany. The program, the instrumental fault detection and identification (IFDI) module, validates in real time output signals of power plant components that are scanned at a fixed rate. The IFDI module, implemented on an IBM-compatible personal computer (PC) with an 80386 processor, was tested on-line at the light water reactor off-normal behavior investigations (LOBI-MOD2) facility in the Joint Research Centre, Ispra, Italy, during the loss-of-feedwater experiment BT-15/BT-16 on November 22, 1990. The measurement signals validated by the IFDI module originate from one of the LOBI-MOD2 facility's two steam generators. During the experiment, sensor faults were simulated by falsifying the measurement signals through electrical resistances arranged in series. This paper deals briefly with the signal validation software and the steam generator model, and discusses the experimental environment and the results obtained in detail.
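
    One standard abrupt-change test that such a module might apply to the residual between a measured signal and its analytically redundant estimate is the two-sided CUSUM. A minimal sketch (the IFDI module's actual methods are not detailed in the abstract, so this is a generic stand-in):

    ```python
    # Sketch of a two-sided CUSUM test for abrupt changes in a residual
    # between a measurement and its analytically redundant estimate.
    import numpy as np

    def cusum(residuals, drift=0.5, threshold=5.0):
        """Return the first index where the CUSUM test fires, or None."""
        g_pos = g_neg = 0.0
        for k, r in enumerate(residuals):
            g_pos = max(0.0, g_pos + r - drift)   # upward-shift statistic
            g_neg = max(0.0, g_neg - r - drift)   # downward-shift statistic
            if g_pos > threshold or g_neg > threshold:
                return k
        return None

    rng = np.random.default_rng(1)
    r = rng.normal(0.0, 1.0, 200)
    r[120:] += 3.0                  # simulated sensor fault at sample 120
    print("alarm at sample:", cusum(r))
    ```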

  12. Comparison of open-source linear programming solvers.

    SciTech Connect (OSTI)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
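
    For readers unfamiliar with the problem class, here is a minimal linear program solved with an open-source solver, in this case SciPy's linprog, which wraps the HiGHS solver; note that HiGHS postdates this study and was not among the solvers tested:

    ```python
    # A tiny linear program of the kind used in solver comparisons:
    #   minimize  -x - 2y   s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
    from scipy.optimize import linprog

    res = linprog(c=[-1, -2],
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    print("optimal objective:", res.fun, "at", res.x)
    ```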

  13. A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2014-01-01

    We present a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, but the first to use its Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors, connected in many possible topologies. The performance characteristics of each vary widely, and the way in which nodes are allocated in each type of system can significantly affect achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. We also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  14. Global-Address Space Networking (GASNet) Library

    Energy Science and Technology Software Center (OSTI)

    2011-04-06

    GASNet (Global-Address Space Networking) is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages such as UPC and Titanium. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet is designed specifically to support high-performance, portable implementations of global address space languages on modern high-end communication networks. The interface provides the flexibility and extensibility required to express a wide variety of communication patterns without sacrificing performance by imposing large computational overheads in the interface. The design of the GASNet interface is partitioned into two layers to maximize porting ease without sacrificing performance: the lower level is a narrow but very general interface called the GASNet core API; the design is based heavily on Active Messages, and is implemented directly on top of each individual network architecture. The upper level is a wider and more expressive interface called the GASNet extended API, which provides high-level operations such as remote memory access and various collective operations. This release implements GASNet over MPI, the Quadrics "elan" API, the Myrinet "GM" API, and the "LAPI" interface to the IBM SP switch. A template is provided for adding support for additional network interfaces.

  15. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    SciTech Connect (OSTI)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k/sub eff/ and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method. (WHK)
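
    The two numerical schemes named above, diamond differencing in space and inner (within-group) source iteration, can be illustrated compactly for one energy group in a slab with vacuum boundaries. This sketch omits ONEDANT's diffusion synthetic acceleration and negative-flux fixup, and all data are invented:

    ```python
    # Minimal one-group discrete-ordinates sweep in a slab: diamond
    # differencing in space, Gauss-Legendre (S8) angular quadrature,
    # and unaccelerated inner source iteration.
    import numpy as np

    nx, L = 50, 10.0
    dx = L / nx
    sigma_t, sigma_s, q = 1.0, 0.5, 1.0            # total, scattering, flat source
    mus, wts = np.polynomial.legendre.leggauss(8)  # S8 angular quadrature

    phi = np.zeros(nx)
    for _ in range(500):                           # inner (within-group) iterations
        src = 0.5 * (sigma_s * phi + q)            # isotropic emission density
        phi_new = np.zeros(nx)
        for mu, w in zip(mus, wts):
            psi_edge = 0.0                         # vacuum incoming flux
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond difference: cell-average flux from incoming edge flux
                psi_avg = (abs(mu) * psi_edge + 0.5 * dx * src[i]) \
                          / (abs(mu) + 0.5 * sigma_t * dx)
                psi_edge = 2.0 * psi_avg - psi_edge  # outgoing edge flux
                phi_new[i] += w * psi_avg
        phi, phi_old = phi_new, phi
        if np.max(np.abs(phi - phi_old)) < 1e-8:
            break
    print("midplane scalar flux:", phi[nx // 2])
    ```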

  16. An update on modeling land-ice/ocean interactions in CESM

    SciTech Connect (OSTI)

    Asay-davis, Xylar

    2011-01-24

    This talk is an update on ongoing land-ice/ocean coupling work within the Community Earth System Model (CESM). The coupling method is designed to allow simulation of a fully dynamic ice/ocean interface, while requiring minimal modification to the existing ocean model (the Parallel Ocean Program, POP). The method makes use of an immersed boundary method (IBM) to represent the geometry of the ice-ocean interface without requiring that the computational grid be modified in time. We show many of the remaining development challenges that need to be addressed in order to perform global, century-long climate runs with fully coupled ocean and ice sheet models. These challenges include moving to a new grid where the computational pole is no longer at the true south pole and several changes to the coupler (the software tool used to communicate between model components) to allow the boundary between land and ocean to vary in time. We discuss benefits for ice/ocean coupling that would be gained from longer-term ocean model development to allow for natural salt fluxes (which conserve both water and salt mass, rather than water volume).

  17. Department of Defense (DOD) renewables and energy efficiency planning (REEP) program manual

    SciTech Connect (OSTI)

    Nemeth, R.J.; Fournier, D.; Debaillie, L.; Edgar, L.; Stroot, P.; Beasley, R.; Edgar, D.; McMillen, L.; Marren, M.

    1995-08-01

    The Renewables and Energy Efficiency Planning (REEP) program was developed at the US Army Construction Engineering Research Laboratories (USACERL). This program allows for the analysis of 78 energy and water conservation opportunities at 239 major DOD installations. REEP uses a series of algorithms in conjunction with installation specific data to estimate the energy and water conservation potential for entire installations. The program provides the energy, financial, pollution, and social benefits of conservation initiatives. The open architecture of the program allows for simple modification of energy and water conservation variables, and installation database values to allow for individualized analysis. The program is essentially a high-level screening tool that can be used to help identify and focus preliminary conservation studies. The REEP program requires an IBM PC or compatible with a 80386 or 80486 microprocessor. It also requires approximately 4 megabytes of disk space and at least 8 megabytes of RAM. The system was developed for a Windows environment and requires Microsoft Windows 3.1{trademark} or higher to run properly.

  18. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication, and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  19. One-Dimensional Heat Conduction

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    ICARUS-LLNL was developed to solve one-dimensional planar, cylindrical, or spherical conduction heat transfer problems. The IBM PC version is a family of programs including ICARUSB, an interactive BASIC heat conduction program; ICARUSF, a FORTRAN heat conduction program; PREICAR, a BASIC preprocessor for ICARUSF; and PLOTIC and CPLOTIC, interpretive BASIC and compiler BASIC plot postprocessor programs. Both ICARUSB and ICARUSF account for multiple material regions and complex boundary conditions, such as convection or radiation. In addition, ICARUSF accounts for temperature-dependent material properties and time or temperature-dependent boundary conditions. PREICAR is a user-friendly preprocessor used to generate or modify ICARUSF input data. PLOTIC and CPLOTIC generate plots of the temperature or heat flux profile at specified times, plots of the variation of temperature or heat flux with time at selected nodes, or plots of the solution grid. First developed in 1974 to allow easy modeling of complex one-dimensional systems, its original application was in the nuclear explosive testing program. Since then it has undergone extensive revision and been applied to problems dealing with laser fusion target fabrication, heat loads on underground tests, magnetic fusion switching tube anodes, and nuclear waste isolation canisters.
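
    The planar case reduces to a familiar explicit finite-difference update. A minimal sketch with fixed end temperatures follows; the material properties are invented, and ICARUS itself also handles temperature-dependent properties and convective or radiative boundaries:

    ```python
    # One-dimensional planar conduction, reduced to a minimal explicit
    # finite-difference sketch with fixed end temperatures.
    import numpy as np

    alpha, L, nx = 1e-4, 0.1, 51       # diffusivity (m^2/s), slab length, nodes
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha           # satisfies the explicit stability limit
    T = np.full(nx, 20.0)              # initial temperature (C)
    T[0], T[-1] = 100.0, 20.0          # fixed boundary temperatures

    for _ in range(2000):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    print("temperature at slab center: %.2f C" % T[nx // 2])
    ```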

  20. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    SciTech Connect (OSTI)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.
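
    The essence of the search-based strategy is to generate candidate variants, time each one on the target machine, and keep the fastest. A toy sketch using blocking factors for a simple kernel; the real auto-tuner searches a far larger space of code transformations:

    ```python
    # Toy auto-tuner: time several blocking factors for a blocked transpose
    # and keep the fastest. Illustrative only; LBMHD's generator emits and
    # times many more optimization variants.
    import time
    import numpy as np

    def blocked_transpose(a, block):
        out = np.empty_like(a.T)
        n = a.shape[0]
        for i in range(0, n, block):
            for j in range(0, n, block):
                out[j:j+block, i:i+block] = a[i:i+block, j:j+block].T
        return out

    a = np.random.rand(2048, 2048)
    timings = {}
    for block in (16, 32, 64, 128, 256):
        t0 = time.perf_counter()
        blocked_transpose(a, block)
        timings[block] = time.perf_counter() - t0
    best = min(timings, key=timings.get)
    print("best block size:", best, "(%.3f s)" % timings[best])
    ```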

  1. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect (OSTI)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
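
    The Roofline model referenced above bounds attainable performance by the lesser of the peak compute rate and peak bandwidth times arithmetic intensity. A sketch with placeholder machine numbers, not measurements from the paper:

    ```python
    # Roofline bound: attainable GF/s = min(peak flops, bandwidth x AI).
    # Machine numbers below are hypothetical placeholders.
    def roofline(peak_gflops, peak_gbps, intensity_flops_per_byte):
        return min(peak_gflops, peak_gbps * intensity_flops_per_byte)

    peak_gflops, peak_gbps = 74.6, 21.3   # hypothetical machine
    for ai in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
        bound = roofline(peak_gflops, peak_gbps, ai)
        print(f"AI={ai:4.2f} flops/byte -> {bound:6.1f} GF/s")
    ```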

  2. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
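
    The cache-blocking idea evaluated in the paper amounts to sweeping the stencil in tiles sized to stay resident in cache rather than in long rows that evict one another. A two-dimensional sketch with illustrative tile sizes:

    ```python
    # Cache-blocked sweep of a 5-point 2-D stencil: process the interior in
    # tiles so the working set of each tile fits in cache. Tile sizes are
    # illustrative; a real tuner would search for them.
    import numpy as np

    def stencil_blocked(src, dst, ti=64, tj=64):
        n, m = src.shape
        for i0 in range(1, n - 1, ti):
            for j0 in range(1, m - 1, tj):
                i1, j1 = min(i0 + ti, n - 1), min(j0 + tj, m - 1)
                dst[i0:i1, j0:j1] = 0.25 * (src[i0-1:i1-1, j0:j1] +
                                            src[i0+1:i1+1, j0:j1] +
                                            src[i0:i1, j0-1:j1-1] +
                                            src[i0:i1, j0+1:j1+1])

    a = np.random.rand(1024, 1024)
    b = np.zeros_like(a)
    stencil_blocked(a, b)
    print("interior mean:", b[1:-1, 1:-1].mean())
    ```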

  3. Self-propelled in-tube shuttle and control system for automated measurements of magnetic field alignment

    SciTech Connect (OSTI)

    Boroski, W.N.; Nicol, T.H.; Pidcoe, S.V. (Space Systems Div.); Zink, R.A.

    1990-03-01

    A magnetic field alignment gauge is used to measure the field angle as a function of axial position in each of the magnets for the Superconducting Super Collider (SSC). Present measurements are made by manually pushing the gauge through the magnet bore tube and stopping at intervals to record field measurements. Gauge location is controlled through graduation marks and alignment pins on the push rods. Field measurements are recorded on a logging multimeter with tape output. Described is a computerized control system being developed to replace the manual procedure for field alignment measurements. The automated system employs a pneumatic walking device to move the measurement gauge through the bore tube. Movement of the device, called the Self-Propelled In-Tube Shuttle (SPITS), is accomplished through an integral, gas-driven, double-acting cylinder. The motion of the SPITS is transferred to the bore tube by means of a pair of controlled, retractable support feet. Control of the SPITS is accomplished through an RS-422 interface from an IBM-compatible computer to a series of solenoid-actuated air valves. Direction of SPITS travel is determined by the air-valve sequence, and is managed through the control software. Precise axial position of the gauge within the magnet is returned to the control system through an optically-encoded digital position transducer attached to the shuttle. Discussed is the performance of the transport device and control system during preliminary testing of the first prototype shuttle. 1 ref., 7 figs.

  4. Software Roadmap to Plug and Play Petaflop/s

    SciTech Connect (OSTI)

    Kramer, Bill; Carter, Jonathan; Skinner, David; Oliker, Lenny; Husbands, Parry; Hargrove, Paul; Shalf, John; Marques, Osni; Ng, Esmond; Drummond, Tony; Yelick, Kathy

    2006-07-31

    In the next five years, the DOE expects to build systems that approach a petaflop in scale. In the near term (two years), DOE will have several near-petaflops systems that are 10 percent to 25 percent of a petaflop-scale system. A common feature of these precursors to petaflop systems (such as the Cray XT3 or the IBM BlueGene/L) is that they rely on an unprecedented degree of concurrency, which puts stress on every aspect of HPC system design. Such complex systems will likely break current best practices for fault resilience, I/O scaling, and debugging, and even raise fundamental questions about languages and application programming models. It is important that potential problems are anticipated far enough in advance that they can be addressed in time to prepare the way for petaflop-scale systems. This report considers the following four questions: (1) What software is on a critical path to make the systems work? (2) What are the strengths/weaknesses of the vendors and of existing vendor solutions? (3) What are the local strengths at the labs? (4) Who are other key players who will play a role and can help?

  5. MPH: A Library for Distributed Multi-Component Environment

    Energy Science and Technology Software Center (OSTI)

    2001-05-01

    A growing trend in developing large and complex applications on today's teraflops computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. We develop MPH, a multi-component handshaking library that allows component models to recognize and talk to each other in a convenient and consistent way, and thus to run multi-component, multi-executable applications effectively on distributed memory architectures. MPH provides the following capabilities: component name registration, resource allocation, inter-component communication, inquiry on the multi-component environment, and standard in/out redirect. It supports the following four integration mechanisms: Multi-Component Single-Executable (MCSE); Single-Component Multi-Executable (SCME); Multi-Component Multi-Executable (MCME); Multi-Instance Multi-Executable (MIME). MPH currently works on the IBM SP, SGI Origin, Compaq AlphaSC, Cray T3E, and PC clusters. It is being adopted in NCAR's CCSM and Colorado State University's icosahedral grid coupled model. A joint communicator between any two components can be created. MPI communication between local processors and remote processors is invoked through component names and the local id. More functions are available to query the global-id, local-id, number of executables, etc.

  6. Karlsruhe Database for Radioactive Wastes (KADABRA) - Accounting and Management System for Radioactive Waste Treatment - 12275

    SciTech Connect (OSTI)

    Himmerkus, Felix; Rittmeyer, Cornelia [WAK Rueckbau- und Entsorgungs- GmbH, 76339 Eggenstein-Leopoldshafen (Germany)

    2012-07-01

    The data management system KADABRA was designed according to the purposes of the Central Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on an IBM System z mainframe. The main function of the system is the data management of all processes related to treatment, transfer and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interim products and waste products, as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non-destructive assay systems, which describe the chemical and radiological properties of residues, production batches, interim products, and final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual status of the material, as well as radiological data and storage position, can be obtained immediately on request. A variety of programs accessing the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the recent requirements of the organization. (authors)

  7. Common Geometry Module

    Energy Science and Technology Software Center (OSTI)

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  8. Users manual for the Chameleon parallel programming tools

    SciTech Connect (OSTI)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.

  9. Diagnosing the Causes and Severity of One-sided Message Contention

    SciTech Connect (OSTI)

    Tallent, Nathan R.; Vishnu, Abhinav; van Dam, Hubertus; Daily, Jeffrey A.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-02-11

    Two trends suggest network contention for one-sided messages is poised to become a performance problem that concerns application developers: an increased interest in one-sided programming models and a rising ratio of hardware threads to network injection bandwidth. Unfortunately, it is difficult to reason about network contention and one-sided messages because one-sided tasks can either decrease or increase contention. We present effective and portable techniques for diagnosing the causes and severity of one-sided message contention. To detect that a message is affected by contention, we maintain statistics representing instantaneous (non-local) network resource demand. Using lightweight measurement and modeling, we identify the portion of a message's latency that is due to contention and whether contention occurs at the initiator or target. We attribute these metrics to program statements in their full static and dynamic context. We characterize contention for an important computational chemistry benchmark on InfiniBand, Cray Aries, and IBM Blue Gene/Q interconnects. We pinpoint the sources of contention, estimate their severity, and show that when message delivery time deviates from an ideal model, there are other messages contending for the same network links. With a small change to the benchmark, we reduce contention up to 50% and improve total runtime as much as 20%.
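
    The detection idea can be sketched as a comparison of measured latency against an ideal alpha-beta cost model; a large positive deviation suggests contention. The parameters below are hypothetical, and the paper's actual model additionally attributes contention to the initiator or the target:

    ```python
    # Sketch: flag a message as contended when its measured latency exceeds
    # an ideal alpha-beta model by more than a tolerance factor. All
    # parameters are hypothetical placeholders.
    def ideal_latency_us(nbytes, alpha_us=1.5, bw_bytes_per_us=5000.0):
        return alpha_us + nbytes / bw_bytes_per_us

    def contention_delay(measured_us, nbytes, tolerance=1.2):
        ideal = ideal_latency_us(nbytes)
        if measured_us > tolerance * ideal:
            return measured_us - ideal      # latency attributed to contention
        return 0.0

    for size, t in [(8, 1.6), (65536, 30.0), (65536, 14.8)]:
        d = contention_delay(t, size)
        print(f"{size:6d} B, measured {t:5.1f} us -> contention {d:.1f} us")
    ```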

  10. INTERLINE 5.0 -- An expanded railroad routing model: Program description, methodology, and revised user`s manual

    SciTech Connect (OSTI)

    Johnson, P.E.; Joy, D.S.; Clarke, D.B.; Jacobi, J.M.

    1993-03-01

    A rail routing model, INTERLINE, has been developed at the Oak Ridge National Laboratory to investigate potential routes for transporting radioactive materials. In Version 5.0, the INTERLINE routing algorithms have been enhanced to include the ability to predict alternative routes, barge routes, and population statistics for any route. The INTERLINE railroad network is essentially a computerized rail atlas describing the US railroad system. All rail lines, with the exception of industrial spurs, are included in the network. Inland waterways and deep water routes, along with their interchange points with the US railroad system, are also included. The network contains over 15,000 rail and barge segments (links) and over 13,000 stations, interchange points, ports, and other locations (nodes). The INTERLINE model has been converted to operate on an IBM-compatible personal computer. At least a 286 computer with a hard disk containing approximately 6 MB of free space is recommended. Enhanced program performance will be obtained by using a random-access memory drive on a 386 or 486 computer.
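
    The routing core of such a network model is a shortest-path search over the links and nodes. A minimal Dijkstra sketch over a toy rail network; the station names and mileages are invented:

    ```python
    # Minimal Dijkstra shortest-path search over a toy rail network of
    # nodes (stations) and weighted links (mileages). Data are invented.
    import heapq

    def shortest_route(graph, start, goal):
        queue, seen = [(0.0, start, [start])], set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == goal:
                return dist, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, miles in graph.get(node, []):
                if nbr not in seen:
                    heapq.heappush(queue, (dist + miles, nbr, path + [nbr]))
        return float("inf"), []

    rail = {"OakRidge": [("Knoxville", 25)],
            "Knoxville": [("Nashville", 180), ("Atlanta", 215)],
            "Nashville": [("Memphis", 210)],
            "Atlanta": [("Memphis", 390)]}
    print(shortest_route(rail, "OakRidge", "Memphis"))
    ```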

  11. INTERLINE 5. 0 -- An expanded railroad routing model: Program description, methodology, and revised user's manual

    SciTech Connect (OSTI)

    Johnson, P.E.; Joy, D.S.; Clarke, D.B.; Jacobi, J.M. (Transportation Center)

    1993-03-01

    A rail routing model, INTERLINE, has been developed at the Oak Ridge National Laboratory to investigate potential routes for transporting radioactive materials. In Version 5.0, the INTERLINE routing algorithms have been enhanced to include the ability to predict alternative routes, barge routes, and population statistics for any route. The INTERLINE railroad network is essentially a computerized rail atlas describing the US railroad system. All rail lines, with the exception of industrial spurs, are included in the network. Inland waterways and deep water routes, along with their interchange points with the US railroad system, are also included. The network contains over 15,000 rail and barge segments (links) and over 13,000 stations, interchange points, ports, and other locations (nodes). The INTERLINE model has been converted to operate on an IBM-compatible personal computer. At least a 286 computer with a hard disk containing approximately 6 MB of free space is recommended. Enhanced program performance will be obtained by using a random-access memory drive on a 386 or 486 computer.

  12. Petascale algorithms for reactor hydrodynamics.

    SciTech Connect (OSTI)

    Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.

    2008-01-01

    We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.

  13. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect (OSTI)

    O'Brien, M J; Brantley, P S

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  14. Scalable Equation of State Capability

    SciTech Connect (OSTI)

    Epperly, T W; Fritsch, F N; Norquist, P D; Sanford, L A

    2007-12-03

    The purpose of this techbase project was to investigate the use of parallel array data types to reduce the memory footprint of the Livermore Equation Of State (LEOS) library. Addressing the memory scalability of LEOS is necessary to run large scientific simulations on IBM BG/L and future architectures with low memory per processing core. We considered using normal MPI, one-sided MPI, and Global Arrays to manage the distributed array and ended up choosing Global Arrays because it was the only communication library that provided the level of asynchronous access required. To reduce the runtime overhead of using a parallel array data structure, a least recently used (LRU) caching algorithm was used to provide a local cache of commonly used parts of the parallel array. The approach was initially implemented in an isolated copy of LEOS and was later integrated into the main trunk of the LEOS Subversion repository. The approach was tested using a simple test problem. Testing indicated that the approach was feasible, and the simple LRU caching had an 86% hit rate.
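
    The LRU policy described can be sketched in a few lines; fetch_remote below stands in for a remote get of a table block from the distributed array and is a placeholder, not LEOS or Global Arrays code:

    ```python
    # Minimal LRU cache of remotely fetched table blocks: on a miss, fetch
    # and insert; when full, evict the least recently used entry.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity, fetch_remote):
            self.capacity, self.fetch_remote = capacity, fetch_remote
            self.data = OrderedDict()
            self.hits = self.misses = 0

        def get(self, key):
            if key in self.data:
                self.data.move_to_end(key)         # mark as most recently used
                self.hits += 1
            else:
                self.misses += 1
                self.data[key] = self.fetch_remote(key)
                if len(self.data) > self.capacity:
                    self.data.popitem(last=False)  # evict least recently used
            return self.data[key]

    cache = LRUCache(2, fetch_remote=lambda k: f"block-{k}")
    for k in (1, 2, 1, 3, 1):
        cache.get(k)
    print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")
    ```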

  15. Coal Preparation Plant Simulation

    Energy Science and Technology Software Center (OSTI)

    1992-02-25

    COALPREP assesses the degree of cleaning obtained with different coal feeds for a given plant configuration and mode of operation. It allows the user to simulate coal preparation plants to determine an optimum plant configuration for a given degree of cleaning. The user can compare the performance of alternative plant configurations as well as determine the impact of various modes of operation for a proposed configuration. The devices that can be modelled include froth flotation devices, washers, dewatering equipment, thermal dryers, rotary breakers, roll crushers, classifiers, screens, blenders and splitters, and gravity thickeners. The user must specify the plant configuration and operating conditions and a description of the coal feed. COALPREP then determines the flowrates within the plant and a description of each flow stream (i.e. the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, BTU content, recoveries, and specific gravity of separation). COALPREP also includes a capability for calculating the cleaning cost per ton of coal. The IBM PC version contains two auxiliary programs, DATAPREP and FORLIST. DATAPREP is an interactive preprocessor for creating and editing COALPREP input data. FORLIST converts carriage-control characters in FORTRAN output data to ASCII line-feed (X''0A'') characters.

  16. WINDOW 4.0: Program description. A PC program for analyzing the thermal performance of fenestration products

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    WINDOW 4.0 is a publicly available IBM PC compatible computer program developed by the Windows and Daylighting Group at Lawrence Berkeley Laboratory for calculating total window thermal performance indices (e.g. U-values, solar heat gain coefficients, shading coefficients, and visible transmittances). WINDOW 4.0 provides a versatile heat transfer analysis method consistent with the rating procedure developed by the National Fenestration Rating Council (NFRC). The program can be used to design and develop new products, to rate and compare performance characteristics of all types of window products, to assist educators in teaching heat transfer through windows, and to help public officials in developing building energy codes. WINDOW 4.0 is a major revision to WINDOW 3.1 and we strongly urge all users to read this manual before using the program. Users who need professional assistance with the WINDOW 4.0 program or other window performance simulation issues are encouraged to contact one or more of the NFRC-accredited Simulation Laboratories. A list of these accredited simulation professionals is available from the NFRC.
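
    The U-value portion of such a calculation reduces to summing thermal resistances in series across air films, glazing layers, and gas gaps. The resistance values below are illustrative placeholders, not NFRC-rated data:

    ```python
    # Series thermal-resistance sketch of a double-glazed window U-value.
    # All layer resistances (m^2-K/W) are illustrative, not rated data.
    def u_value(resistances_m2K_per_W):
        return 1.0 / sum(resistances_m2K_per_W)

    layers = [0.04,         # exterior air film
              0.003 / 1.0,  # 3 mm glass, k = 1.0 W/m-K
              0.16,         # 12.7 mm air gap (effective resistance)
              0.003 / 1.0,  # 3 mm glass
              0.12]         # interior air film
    print("U = %.2f W/m2-K" % u_value(layers))
    ```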

  17. WINDOW 4.0: Program description. A PC program for analyzing the thermal performance of fenestration products

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    WINDOW 4.0 is a publicly available IBM PC compatible computer program developed by the Windows and Daylighting Group at Lawrence Berkeley Laboratory for calculating total window thermal performance indices (e.g. U-values, solar heat gain coefficients, shading coefficients, and visible transmittances). WINDOW 4.0 provides a versatile heat transfer analysis method consistent with the rating procedure developed by the National Fenestration Rating Council (NFRC). The program can be used to design and develop new products, to rate and compare performance characteristics of all types of window products, to assist educators in teaching heat transfer through windows, and to help public officials in developing building energy codes. WINDOW 4.0 is a major revision to WINDOW 3.1 and we strongly urge all users to read this manual before using the program. Users who need professional assistance with the WINDOW 4.0 program or other window performance simulation issues are encouraged to contact one or more of the NFRC-accredited Simulation Laboratories. A list of these accredited simulation professionals is available from the NFRC.

  18. Second update: The Gordon Bell Competition entry gb110s2

    SciTech Connect (OSTI)

    Vranas, P; Soltz, R

    2006-11-12

    Since the update to our entry of October 20th we have just made a significant improvement. We understand that this is past the deadline for updates and very close to the conference date. However, Lawrence Livermore National Laboratory has just updated the BG/L system software on their full 64-rack BG/L supercomputer to IBM-BGL Release 3. As we discussed in our update of October 20, this release includes our custom L1 and SRAM access functions that allow us to achieve higher sustained performance. Just a few hours ago we got access to the full system and obtained the fastest sustained performance point. On the full 131,072-CPU-core system, QCD sustains 70.9 teraflops for the Dirac operator and 67.9 teraflops for the full Conjugate Gradient inverter. This is about 20% faster than our last update. We attach the corresponding speedup figure. As the figure shows, the speedup is perfect. This figure is the same as Figure 1 of our October 20th update except that it now includes the 131,072-CPU-core point.

  19. Final Report: Performance Modeling Activities in PERC2

    SciTech Connect (OSTI)

    Allan Snavely

    2007-02-25

    Progress in Performance Modeling for PERC2 resulted in: • Automated modeling tools that are robust, able to characterize large applications running at scale while simultaneously simulating the memory hierarchies of multiple machines in parallel. • Porting of the requisite tracer tools to multiple platforms. • Improved performance models that use higher-resolution memory models than ever before. • Adding control-flow and data-dependency analysis to the tracers used in performance tools. • Exploring and developing several new modeling methodologies. • Using modeling tools to develop performance models for strategic codes. • Application of the modeling methodology to make a large number of "blind" performance predictions on certain mission partner applications, targeting most currently available system architectures. • Error analysis to correct some systematic biases encountered as part of the large-scale blind prediction exercises. • Addition of instrumentation capabilities for communication libraries other than MPI. • Dissemination of the tools and modeling methods to several mission partners, including DoD HPCMO and two DARPA HPCS vendors (Cray and IBM), as well as to the wider HPC community via a series of tutorials.

  20. Engineering Design Information System (EDIS)

    SciTech Connect (OSTI)

    Smith, P.S.; Short, R.D.; Schwarz, R.K.

    1990-11-01

    This manual is a guide to the use of the Engineering Design Information System (EDIS) Phase I. The system runs on the Martin Marietta Energy Systems, Inc., IBM 3081 unclassified computer. This is the first phase in the implementation of EDIS, which is an index, storage, and retrieval system for engineering documents produced at various plants and laboratories operated by Energy Systems for the Department of Energy. This manual presents an overview of EDIS, describing the system's purpose; the functions it performs; hardware, software, and security requirements; and help and error functions. This manual describes how to access EDIS and how to operate system functions using Database 2 (DB2), Time Sharing Option (TSO), Interactive System Productivity Facility (ISPF), and the Soft Master viewing features employed by this system. Appendix A contains a description of the Soft Master viewing capabilities provided through the EDIS View function. Appendix B provides examples of the system error screens and help screens for valid codes used for screen entry. Appendix C contains a dictionary of data elements and descriptions.

  1. Computer simulation of coal preparation plants. Part 2. User's manual. Final report

    SciTech Connect (OSTI)

    Gottfried, B.S.; Tierney, J.W.

    1985-12-01

    This report describes a comprehensive computer program that allows the user to simulate the performance of realistic coal preparation plants. The program is very flexible in the sense that it can accommodate any particular plant configuration that may be of interest. This allows the user to compare the performance of different plant configurations and to determine the impact of various modes of operation with the same configuration. In addition, the program can be used to assess the degree of cleaning obtained with different coal feeds for a given plant configuration and a given mode of operation. Use of the simulator requires that the user specify the plant configuration, the plant operating conditions, and a description of the coal feed. The simulator will then determine the flowrates within the plant and a description of each stream (i.e., the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, and Btu content). The simulation program has been written in modular form in the Fortran language. It can be implemented on a great many different types of computers, ranging from large scientific mainframes to IBM-type personal computers with a fixed disk. Some customization may be required, however, to ensure compatibility with the features of Fortran available on a particular computer. Part I of this report contains a general description of the methods used to carry out the simulation. Each of the major types of units is described separately, in addition to a description of the overall system analysis. Part II is intended as a user's manual. It contains a listing of the mainframe version of the program, instructions for its use (on both a mainframe and a microcomputer), and output for a representative sample problem.

  2. Guide to verification and validation of the SCALE-4 criticality safety software

    SciTech Connect (OSTI)

    Emmett, M.B.; Jordan, W.C.

    1996-12-01

    Whenever a decision is made to newly install the SCALE nuclear criticality safety software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the nuclear criticality safety software in a version of SCALE-4. The verification problems specified by the code developers have been run, and the results compare favorably with those in the SCALE 4.2 baseline. The results reported in this document are from the SCALE 4.2P version, which was run on an IBM RS/6000 workstation. These results verify that the SCALE-4 nuclear criticality safety software has been correctly installed and is functioning properly. A validation has been performed for KENO V.a utilizing the CSAS25 criticality sequence and the SCALE 27-group cross-section library for ²³³U, ²³⁵U, and ²³⁹Pu fissile systems in a broad range of geometries and fissile fuel forms. The experimental models used for the validation were taken from three previous validations of KENO V.a. A statistical analysis of the calculated results was used to determine the average calculational bias and a subcritical k_eff criterion for each class of systems validated. Included in the statistical analysis is a means of estimating the margin of subcriticality in k_eff. This validation demonstrates that KENO V.a and the 27-group library may be used for nuclear criticality safety computations provided the system being analyzed falls within the range of the experiments used in the validation.
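
    As an illustration of the statistical treatment described (an average calculational bias plus a subcritical criterion), here is a minimal sketch. The k_eff values, the two-sigma band, and the 0.05 administrative margin are all invented for illustration and do not reproduce the report's actual statistical method.

        import statistics

        # Hypothetical calculated k-eff values for one class of validation
        # experiments (not results from the SCALE-4 validation).
        keff = [0.9952, 1.0018, 0.9987, 0.9941, 1.0003, 0.9969]

        mean_k = statistics.mean(keff)
        sd_k = statistics.stdev(keff)
        bias = mean_k - 1.0  # average calculational bias
        # One simple subcritical criterion: unity plus the bias, minus an
        # uncertainty band and an administrative margin (0.05 illustrative).
        usl = 1.0 + bias - 2.0 * sd_k - 0.05
        print(f"bias = {bias:+.4f}, upper subcritical limit ~ {usl:.4f}")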

  3. Adversary Sequence Interruption Model

    Energy Science and Technology Software Center (OSTI)

    1985-11-15

    PC EASI is an IBM personal computer or PC-compatible version of an analytical technique for measuring the effectiveness of physical protection systems. PC EASI utilizes a methodology called Estimate of Adversary Sequence Interruption (EASI), which evaluates the probability of interruption (PI) for a given sequence of adversary tasks. Probability of interruption is defined as the probability that the response force will arrive before the adversary force has completed its task. The EASI methodology is a probabilistic approach that analytically evaluates the basic functions of the physical security system (detection, assessment, communications, and delay) with respect to response time along a single adversary path. It is important that the most critical scenarios for each target be identified to ensure that vulnerabilities have not been overlooked. If the facility is not overly complex, this can be accomplished by examining all paths. If the facility is complex, a global model such as Safeguards Automated Facility Evaluation (SAFE) may be used to identify the most vulnerable paths. PC EASI is menu-driven with screen forms for entering and editing the basic scenarios. In addition to evaluating PI for the basic scenario, the sensitivities of many of the parameters chosen in the scenario can be analyzed. These sensitivities provide information to aid the analyst in determining the tradeoffs for reducing the probability of interruption. PC EASI runs under Micro Data Base Systems' proprietary database management system Knowledgeman (KMAN). KMAN provides the user environment and file management for the specified basic scenarios, and KGRAPH provides the graphical output of the sensitivity calculations. This software is not included. Due to errors in release 2 of KMAN, PC EASI will not execute properly under it; release 1.07 of KMAN is required.
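
    A minimal sketch of an EASI-style probability-of-interruption calculation is shown below, assuming a single adversary path, a fixed communication probability, and normally distributed response-force timing. The detection probabilities, delays, and response parameters are hypothetical, not values from PC EASI.

        import math

        def phi(x):
            """Standard normal CDF."""
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        def easi_pi(path, response_mean, response_sd, p_comm=0.95):
            """EASI-style P(interruption) along one adversary path.

            path: list of (p_detect, delay_after_element) in adversary order.
            """
            remaining = [sum(d for _, d in path[i:]) for i in range(len(path))]
            p_undetected = 1.0
            pi = 0.0
            for (p_d, _), t_left in zip(path, remaining):
                # Response wins if it arrives before the remaining delay.
                p_response = phi((t_left - response_mean) / response_sd)
                pi += p_undetected * p_d * p_comm * p_response
                p_undetected *= (1.0 - p_d)
            return pi

        # Hypothetical three-element path: fence, door, vault (seconds).
        path = [(0.9, 90), (0.5, 120), (0.2, 60)]
        print(f"P(interruption) = "
              f"{easi_pi(path, response_mean=180, response_sd=30):.3f}")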

  4. Studies of acute and chronic radiation injury at the Biological and Medical Research Division, Argonne National Laboratory, 1953-1970: Description of individual studies, data files, codes, and summaries of significant findings

    SciTech Connect (OSTI)

    Grahn, D.; Fox, C.; Wright, B.J.; Carnes, B.A.

    1994-05-01

    Between 1953 and 1970, studies on the long-term effects of external x-ray and γ irradiation on inbred and hybrid mouse stocks were carried out at the Biological and Medical Research Division, Argonne National Laboratory. The results of these studies, plus the mating, litter, and pre-experimental stock records, were routinely coded on IBM cards for statistical analysis and record maintenance. Also retained were the survival data from studies performed in the period 1943-1953 at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland. The card-image data files have been corrected where necessary and refiled on hard disks for long-term storage and ease of accessibility. In this report, the individual studies and data files are described, and pertinent factors regarding caging, husbandry, radiation procedures, choice of animals, and other logistical details are summarized. Some of the findings are also presented. Descriptions of the different mouse stocks and hybrids are included in an appendix; more than three dozen stocks were involved in these studies. Two other appendices detail the data files in their original card-image format and the numerical codes used to describe the animal's exit from an experiment and, for some studies, any associated pathologic findings. Tabular summaries of sample sizes, dose levels, and other variables are also given to assist investigators in their selection of data for analysis. The archive is open to any investigator with legitimate interests and a willingness to collaborate and acknowledge the source of the data and to recognize appropriate conditions or caveats.

  5. Scalable Performance Measurement and Analysis

    SciTech Connect (OSTI)

    Gamblin, T

    2009-10-27

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
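
    As a sketch of the first approach, the snippet below compresses a synthetic time-varying load signal by keeping only the largest wavelet coefficients, using the PyWavelets package. The wavelet choice, decomposition level, and retention fraction are illustrative, not Libra's actual settings.

        import numpy as np
        import pywt  # PyWavelets

        def compress(signal, keep=0.05, wavelet="db4", level=4):
            """Zero all but the largest `keep` fraction of coefficients."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            flat = np.concatenate(coeffs)
            cutoff = np.quantile(np.abs(flat), 1.0 - keep)
            return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

        rng = np.random.default_rng(0)
        load = (np.sin(np.linspace(0, 8 * np.pi, 1024))
                + 0.1 * rng.standard_normal(1024))  # toy per-step load
        recon = pywt.waverec(compress(load), "db4")
        err = np.max(np.abs(recon[:1024] - load))
        print(f"max reconstruction error with 5% of coefficients: {err:.3f}")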

  6. QA procedures and emissions from nonstandard sources in AQUIS, a PC-based emission inventory and air permit manager

    SciTech Connect (OSTI)

    Smith, A.E.; Tschanz, J.; Monarch, M.

    1996-05-01

    The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.

  7. CPS and the Fermilab farms

    SciTech Connect (OSTI)

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level, rather than at the traditional single line of code level. This fits the requirements of high energy physics applications, such as event reconstruction or detector simulations, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry. In the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is most suited for jobs with relatively low I/O requirements compared to CPU. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another which has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.
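
    The flavor of CPS-style large-grained parallelism, where each task is a whole subroutine-level unit of work such as reconstructing one event, can be sketched as a simple process farm. This uses Python's multiprocessing for brevity; it is not CPS's actual API, and reconstruct() is a stand-in for a real physics kernel.

        from multiprocessing import Pool

        def reconstruct(event):
            """Whole-event 'subroutine': one large-grained task."""
            tracks = sum(hit % 7 == 0 for hit in event)  # toy computation
            return tracks

        if __name__ == "__main__":
            # 100 synthetic events of 1000 hits each.
            events = [list(range(i, i + 1000)) for i in range(0, 100000, 1000)]
            with Pool(processes=8) as pool:
                results = pool.map(reconstruct, events)  # one event per task
            print(f"processed {len(results)} events, "
                  f"{sum(results)} tracks found")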

  8. A Big Data Approach to Analyzing Market Volatility

    SciTech Connect (OSTI)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming; Leinweber, David; Ruebel, Oliver

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that HPC resources and the techniques of data-intensive science can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and takes 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelized computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time, an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7 percent averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93 percent of the cases.
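
    A minimal sketch of a VPIN calculation is given below, assuming tick-rule trade classification and fixed-size volume buckets. Real implementations, including the paper's, use more careful bulk volume classification and split trades exactly across bucket boundaries; all parameters here are illustrative.

        import numpy as np

        def vpin(prices, volumes, bucket_volume, window=50):
            """VPIN sketch: volume buckets, tick-rule buy/sell split,
            rolling mean of the normalized volume imbalance."""
            buy = sell = filled = 0.0
            imbalances = []
            for i in range(1, len(prices)):
                v = volumes[i]
                if prices[i] >= prices[i - 1]:  # tick rule: upticks buy
                    buy += v
                else:
                    sell += v
                filled += v
                if filled >= bucket_volume:     # bucket complete
                    imbalances.append(abs(buy - sell) / filled)
                    buy = sell = filled = 0.0
            imb = np.asarray(imbalances)
            return np.convolve(imb, np.ones(window) / window, mode="valid")

        rng = np.random.default_rng(1)
        p = 100 + np.cumsum(rng.standard_normal(100000) * 0.01)  # toy prices
        v = rng.integers(1, 100, size=100000)                    # toy volumes
        series = vpin(p, v, bucket_volume=50000)
        print(f"{series.size} VPIN values, max = {series.max():.3f}")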

  9. Quantitative genetic activity graphical profiles for use in chemical evaluation

    SciTech Connect (OSTI)

    Waters, M.D.; Stack, H.F.; Garrett, N.E.; Jackson, M.A.

    1990-12-31

    A graphic approach, termed a Genetic Activity Profile (GAP), was developed to display a matrix of data on the genetic and related effects of selected chemical agents. The profiles provide a visual overview of the quantitative (doses) and qualitative (test results) data for each chemical. Either the lowest effective dose or highest ineffective dose is recorded for each agent and bioassay. Up to 200 different test systems are represented across the GAP. Bioassay systems are organized according to the phylogeny of the test organisms and the end points of genetic activity. The methodology for producing and evaluating genetic activity profiles was developed in collaboration with the International Agency for Research on Cancer (IARC). Data on individual chemicals were compiled by IARC and by the US Environmental Protection Agency (EPA). Data are available on 343 compounds selected from volumes 1-53 of the IARC Monographs and on 115 compounds identified as Superfund Priority Substances. Software to display the GAPs on an IBM-compatible personal computer is available from the authors. Structurally similar compounds frequently display qualitatively and quantitatively similar profiles of genetic activity. Through examination of the patterns of GAPs of pairs and groups of chemicals, it is possible to make more informed decisions regarding the selection of test batteries to be used in evaluation of chemical analogs. GAPs provided useful data for development of weight-of-evidence hazard ranking schemes. Also, some knowledge of the potential genetic activity of complex environmental mixtures may be gained from an assessment of the genetic activity profiles of component chemicals. The fundamental techniques and computer programs devised for the GAP database may be used to develop similar databases in other disciplines. 36 refs., 2 figs.

  10. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    SciTech Connect (OSTI)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large-deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general-purpose mesh generator, is used. It runs on everything from IBM PCs to Crays, and can generate 1000 nodes/minute on a PC. With its efficient hidden-line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post-processor, is used to display DYNA3D output. In addition to the standard monochrome hidden-line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8-color video screens by plotting color bands, superimposed on the mesh, which indicate the values of the state variables. For higher-quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden-line removal in aiding the analyst in understanding the results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.
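
    The heart of an explicit structural-dynamics code of this class is a central-difference (leapfrog-style) time step with no global matrix solve. The one-degree-of-freedom sketch below illustrates the update, with a toy spring force standing in for the element internal-force calculation; it is not LLNL code.

        # Explicit central-difference step for structural dynamics:
        # acceleration from force balance, then velocity, then position.
        def explicit_step(x, v, dt, mass, internal_force, external_force=0.0):
            a = (external_force - internal_force(x)) / mass
            v = v + a * dt  # update velocity from acceleration
            x = x + v * dt  # then advance position
            return x, v

        k, m, dt = 100.0, 1.0, 0.001  # stable: dt well below 2/sqrt(k/m)
        x, v = 1.0, 0.0               # initial displacement, at rest
        for _ in range(10000):        # integrate to t = 10 s
            x, v = explicit_step(x, v, dt, m, internal_force=lambda u: k * u)
        print(f"x(t=10) = {x:.3f}")   # oscillates with no numerical damping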

  11. THE LOS ALAMOS NATIONAL LABORATORY ATMOSPHERIC TRANSPORT AND DIFFUSION MODELS

    SciTech Connect (OSTI)

    M. WILLIAMS

    1999-08-01

    The LANL atmospheric transport and diffusion models are composed of two state-of-the-art computer codes. The first is an atmospheric wind model called HOTMAC, Higher Order Turbulence Model for Atmospheric Circulations. HOTMAC generates wind and turbulence fields by solving a set of atmospheric dynamic equations. The second is an atmospheric diffusion model called RAPTAD, Random Particle Transport And Diffusion. RAPTAD uses the wind and turbulence output from HOTMAC to compute particle trajectories and concentrations at any location downwind from a source. Both of these models, originally developed as research codes on supercomputers, have been modified to run on microcomputers. Because the capability of microcomputers is advancing so rapidly, the expectation is that they will eventually become as good as today's supercomputers. Both models now run on desktop or deskside computers, such as an IBM PC/AT with an Opus PM 350 32-bit coprocessor board and a SUN workstation. The codes have also been modified so that high-level graphics (NCAR Graphics) of the output from both models are displayed on the desktop computer monitors and plotted on a laser printer. Two programs, HOTPLT and RAPLOT, produce wind vector plots of the output from HOTMAC and particle trajectory plots of the output from RAPTAD, respectively. A third, CONPLT, provides concentration contour plots. Section II describes step-by-step operational procedures, specifically for a SUN-4 deskside computer, on how to run the main programs HOTMAC and RAPTAD and the graphics programs to display the results. Governing equations, boundary conditions, and initial values of HOTMAC and RAPTAD are discussed in Section III. Finite-difference representations of the governing equations, numerical solution procedures, and the grid system are given in Section IV.
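
    The random-particle idea behind RAPTAD-style dispersion can be sketched as mean-wind advection plus a random turbulent displacement each step. The uniform wind and eddy diffusivity below are illustrative stand-ins for HOTMAC's spatially varying fields.

        import numpy as np

        def disperse(n=10000, steps=600, dt=1.0, u=(3.0, 0.5), K=5.0, seed=0):
            """Advect n particles in a uniform wind u (m/s) with a
            random-walk diffusion step sized by eddy diffusivity K (m2/s)."""
            rng = np.random.default_rng(seed)
            pos = np.zeros((n, 2))              # all particles at the source
            sigma = np.sqrt(2.0 * K * dt)       # random-walk step size
            for _ in range(steps):
                pos += np.asarray(u) * dt       # mean-wind advection
                pos += sigma * rng.standard_normal((n, 2))  # turbulence
            return pos

        plume = disperse()
        print("mean downwind distance:", plume[:, 0].mean())
        print("crosswind spread (std):", plume[:, 1].std())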

  12. RTAP evaluation

    SciTech Connect (OSTI)

    Cupps, K.; Elko, S.; Folta, P.

    1995-01-23

    An in-depth analysis of the RTAP product was undertaken within the CNC associate program to determine the feasibility of utilizing it to replace the current Supervisory Control System that supports the AVLIS program. This document contains the results of that evaluation. With some fundamental redesign the current Supervisory Control System could meet the needs described above. The redesign would require a large amount of software rewriting and would be very time-consuming. The higher-level functionality (alarming, automation, etc.) would have to wait until its completion. Our current understanding and preliminary testing indicate that using commercial software is the best way to get these new features at the minimum cost to the program. Additional savings will be obtained by moving the maintenance costs of the basic control system from in-house to commercial industry and allowing our developers to concentrate on the unique control areas that require customization. Our current operating system, VMS, has become a hindrance. The UNIX operating system has become the choice for most scientific and engineering systems, and we should follow suit. As a result of the commercial system survey referenced above, we selected RTAP, a SCADA product developed by Hewlett-Packard (HP), as the most favorable product to replace the current supervisory system in AVLIS. It is an extremely open system, with a large, well-defined Application Programming Interface (API). This will allow the seamless integration of unique front-end devices in the laser area (e.g., the Optical Device Controller). RTAP also possesses various functionality that is lacking in our current system: integrated alarming, a real-time configurable database, system scalability, and a Sequence Control Language (SQL, developed by CPU, an RTAP Channel Partner) that will facilitate the automation necessary to bring the AVLIS process to plant-line operation. It runs on HP 9000, DEC Alpha, IBM RS/6000, and Sun workstations.

  13. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain.

  14. Argonne National Laboratory Physics Division annual report, January--December 1996

    SciTech Connect (OSTI)

    Thayer, K.J.

    1997-08-01

    The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator Facility (TJNAF), for which the Argonne medium energy nuclear physics group was responsible, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed, and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are imbedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; and atomic and molecular structure with high-energy x-rays. The experimental efforts are being complemented with efforts in theory, from QCD to nucleon-meson systems to the structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours of beam on target for experiments during the past fiscal year.

  15. TEDANN: Turbine engine diagnostic artificial neural network

    SciTech Connect (OSTI)

    Kangas, L.J.; Greitzer, F.L.; Illi, O.J. Jr.

    1994-03-17

    The initial focus of TEDANN is on AGT-1500 fuel flow dynamics: that is, fuel flow faults detectable in the signals from the Electronic Control Unit's (ECU) diagnostic connector. These voltage signals represent the status of the Electro-Mechanical Fuel System (EMFS) in response to ECU commands. The EMFS is a fuel metering device that delivers fuel to the turbine engine under the management of the ECU. The ECU is an analog computer whose fuel flow algorithm is dependent upon throttle position, ambient air and turbine inlet temperatures, and compressor and turbine speeds. Each of these variables has a representative voltage signal available at the ECU's J1 diagnostic connector, which is accessed via the Automatic Breakout Box (ABOB). The ABOB is a firmware program capable of converting 128 separate analog data signals into digital format. The ECU's J1 diagnostic connector provides 32 analog signals to the ABOB. The ABOB contains a 128-to-1 multiplexer and an analog-to-digital converter, both operated by an 8-bit embedded controller. The Army Research Laboratory (ARL) developed and published the hardware specifications as well as the micro-code for the ABOB Intel EPROM processor and the internal code for the multiplexer driver subroutine. Once the ECU analog readings are converted into a digital format, the data stream will be input directly into TEDANN via the serial RS-232 port of the Contact Test Set (CTS) computer. The CTS computer is an IBM-compatible personal computer designed and constructed for tactical use on the battlefield. The CTS has a 50 MHz 32-bit Intel 80486DX processor. It has a 200 MB hard drive and 8 MB of RAM. The CTS also has serial, parallel and SCSI interface ports. The CTS will also host a frame-based expert system for diagnosing turbine engine faults (referred to as TED; not shown in Figure 1).

  16. RAMONA-4B development for SBWR safety studies

    SciTech Connect (OSTI)

    Rohatgi, U.S.; Aronson, A.L.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.

    1993-12-31

    The Simplified Boiling Water Reactor (SBWR) is a revolutionary design of a boiling-water reactor. The reactor is based on passive safety systems such as natural circulation, gravity flow, pressurized gas, and condensation. SBWR has no active safety systems, and the flow in the vessel is by natural circulation. There is a large chimney section above the core to provide a buoyancy head for natural circulation. The reactor can be shut down by any of four systems, namely scram, Fine Motion Control Rod Drive (FMCRD), Alternate Rod Insertion (ARI), and Standby Liquid Control System (SLCS). The safety injection is by gravity drain from the Gravity Driven Cooling System (GDCS) and Suppression Pool (SP). The heat sink is through two types of heat exchangers submerged in tanks of water: the Isolation Condenser (IC) and the Passive Containment Cooling System (PCCS). The RAMONA-4B code has been developed to simulate normal operation and reactivity transients, and to address the instability issues for the SBWR. The code has three-dimensional neutron kinetics coupled to multiple parallel-channel thermal-hydraulics. The two-phase thermal hydraulics is based on a nonhomogeneous, nonequilibrium drift-flux formulation. It employs explicit integration to solve all state equations (except for neutron kinetics) in order to predict instability without numerical damping. The objective of this project is to develop a Sun SPARC- and IBM RS/6000-based RAMONA-4B code for application to SBWR safety analyses, in particular for stability and ATWS studies.
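
    For reference, drift-flux formulations of this kind are commonly closed with the Zuber-Findlay relation between the void fraction and the volumetric fluxes; this is the standard textbook form, not necessarily RAMONA-4B's exact constitutive set:

        \[
        \langle u_g \rangle = C_0 \langle j \rangle + V_{gj},
        \qquad
        \alpha = \frac{\langle j_g \rangle}{C_0 \langle j \rangle + V_{gj}},
        \]

    where \(\alpha\) is the void fraction, \(j_g\) and \(j\) are the gas and mixture volumetric fluxes, \(C_0\) is the distribution parameter, and \(V_{gj}\) is the drift velocity.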

  17. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    SciTech Connect (OSTI)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from both the resource provider's and the end-user's perspectives. To achieve this, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.

  18. The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill

    SciTech Connect (OSTI)

    Fowler, Robert

    2012-12-10

    This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and InfiniBand communication networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust has been performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was collected in a database for the purpose of broader analysis. Third, the UNC work was done at RENCI (the Renaissance Computing Institute), which has extensive expertise and facilities for scientific data visualization, so we acted in an ongoing consulting and support role in that area.

  19. A Fault Oblivious Extreme-Scale Execution Environment

    SciTech Connect (OSTI)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next-generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi-core to many-core processors. We developed a new operating system, NIX, which supports role-based allocation of cores to processes and was released as open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task-parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task-parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.
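
    The Linda tuple-space model mentioned above can be sketched in a few lines: workers post tasks with out() and claim them with a blocking, destructive in(). This toy in-memory version (Python, with hypothetical method names) omits the distribution, low-level interconnect, and fault-tolerance layers that were the point of the FOX library.

        import threading

        class TupleSpace:
            """Toy Linda-style tuple space: shared, pattern-matched tasks."""
            def __init__(self):
                self._tuples, self._cv = [], threading.Condition()

            def out(self, tup):              # post a tuple
                with self._cv:
                    self._tuples.append(tup)
                    self._cv.notify_all()

            def in_(self, pattern):          # blocking, destructive match
                with self._cv:
                    while True:
                        for t in self._tuples:
                            if all(p is None or p == v
                                   for p, v in zip(pattern, t)):
                                self._tuples.remove(t)
                                return t
                        self._cv.wait()      # sleep until a new out()

        ts = TupleSpace()
        ts.out(("task", 42))
        print(ts.in_(("task", None)))        # -> ('task', 42)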

  20. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect (OSTI)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei [Collaboratory for Advanced Computing and Simulations, Department of Physics and Astronomy, Department of Computer Science, and Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, California 90089-0242 (United States); with additional affiliations at Kumamoto University, Kumamoto 860-8555 (Japan); Kyoto University, Kyoto 606-8502 (Japan); Kyushu University, Fukuoka 819-0395 (Japan); and the National Nanotechnology Center, Pathumthani 12120 (Thailand)]

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6,400-atom amorphous molecular solid, reaching the experimental time scales.
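
    The DCR control flow itself can be illustrated with a one-dimensional cartoon: solve on overlapping domains (divide/conquer), then stitch together only each domain's core region (recombine). The "solver" below is a simple smoothing filter standing in for LDC-DFT, and the buffer width is illustrative.

        import numpy as np

        def dcr_solve(field, n_domains=4, buffer=8):
            """Divide into overlapping domains, solve locally, recombine
            the non-overlapping cores into a global solution."""
            n = field.size
            core = n // n_domains
            out = np.empty_like(field)
            for d in range(n_domains):
                lo, hi = d * core, (d + 1) * core
                blo = max(0, lo - buffer)                   # divide with
                bhi = min(n, hi + buffer)                   # buffer overlap
                local = np.convolve(field[blo:bhi],
                                    np.ones(5) / 5, mode="same")
                out[lo:hi] = local[lo - blo:hi - blo]       # recombine cores
            return out

        x = np.random.default_rng(2).standard_normal(1024)
        global_ref = np.convolve(x, np.ones(5) / 5, mode="same")
        err = np.abs(dcr_solve(x) - global_ref)[8:-8].max()
        print(f"max interior deviation from global solve: {err:.2e}")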

  1. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect (OSTI)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  2. Towards Energy-Centric Computing and Computer Architecture

    SciTech Connect (OSTI)

    2011-02-09

    Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software. About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL, where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar, he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards between 2001 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of IEEE and ACM.

  3. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (OSTI)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic Fortran integration. Future versions will extend the functionality substantially, provide a number of core parallel tools, and provide support across a wide range of parallel architectures and languages.

  4. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI, Y.; BALTZ, A.; CREUTZ, M.; GYULASSY, M.; OHTA, S.

    2002-09-26

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from both Columbia University and RBRC, along with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility and possibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite-temperature QCD, and kaon (ΔI = 1/2 and CP violation) and nucleon (the structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g., up to ≈A = 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy ion reactions at RHIC. Scott Pratt and Steffen Bass gave general overviews of how qualitatively different types of physical processes evolve temporally in heavy ion reactions. Denes Molnar concentrated on the application of hydrodynamics, and Alex Krasnitz on a classical Yang-Mills field theory for the initial phase. We were pleasantly surprised by the excellence of the talks and the substantial interest from all parties. The diversity of the audience forced the speakers to give their talks at an understandable level, which was highly appreciated. One particular bonus of the discussions could be the application of highly developed three-dimensional astrophysics hydrodynamics codes to heavy ion reactions.

  5. Fundamental Mechanisms Driving the Amorphous to Crystalline Phase Transformation

    SciTech Connect (OSTI)

    Reed, B W; Browning, N D; Santala, M K; LaGrange, T; Gilmer, G H; Masiel, D J; Campbell, G H; Raoux, S; Topuria, T; Meister, S; Cui, Y

    2011-01-04

    Phase transformations are ubiquitous, fundamental phenomena that lie at the heart of many structural, optical and electronic properties in condensed matter physics and materials science. Many transformations, especially those occurring under extreme conditions such as rapid changes in the thermodynamic state, are controlled by poorly understood processes involving the nucleation and quenching of metastable phases. Typically these processes occur on time and length scales invisible to most experimental techniques ({micro}s and faster, nm and smaller), so our understanding of the dynamics tends to be very limited and indirect, often relying on simulations combined with experimental study of the ''time infinity'' end state. Experimental techniques that can directly probe phase transformations on their proper time and length scales are therefore key to providing fundamental insights into the whole area of transformation physics and materials science. LLNL possesses a unique dynamic transmission electron microscope (DTEM) capable of taking images and diffraction patterns of laser-driven material processes with resolution measured in nanometers and nanoseconds. The DTEM has previously used time-resolved diffraction patterns to quantitatively study phase transformations that are orders of magnitude too fast for conventional in situ TEM. More recently the microscope has demonstrated the ability to directly image a reaction front moving at {approx}13 nm/ns and the nucleation of a new phase behind that front. Certain compound semiconductor phase change materials, such as Ge{sub 2}Sb{sub 2}Te{sub 5} (GST), Sb{sub 2}Te and GeSb, exhibit a technologically important series of transformations on scales that fall neatly into the performance specifications of the DTEM. If a small portion of such material is heated above its melting point and then rapidly cooled, it quenches into an amorphous state. Heating again with a less intense pulse leads to recrystallization into a vacancy-stabilized metastable rock salt structure. Each transformation takes {approx}10-100 ns, and the cycle can be driven repeatedly a very large number of times with a nanosecond laser such as the DTEM's sample drive laser. These materials are widely used in optical storage devices such as rewritable CDs and DVDs, and they are also applied in a novel solid state memory technology - phase change memory (PCM). PCM has the potential to produce nonvolatile memory systems with high speed, extreme density, and very low power requirements. For PCM applications several materials properties are of great importance: the resistivities of both phases, the crystallization temperature, the melting point, the crystallization speed, reversibility (number of phase-transformation cycles without degradation) and stability against crystallization at elevated temperature. For a viable technology, all these properties need to have good scaling behavior, as dimensions of the memory cells will shrink with every generation. In this LDRD project, we used the unique single-shot nanosecond in situ experimentation capabilities of the DTEM to watch these transformations in GST on the time and length scales most relevant for device applications. Interpretation of the results was performed in conjunction with atomistic and finite-element computations. Samples were provided by collaborators at IBM and Stanford University. We observed, and measured the kinetics of, the amorphous-crystalline and melting-solidification transitions in uniform thin-film samples. 
Above a certain threshold, the crystal nucleation rate was found to be enormously high (with many nuclei appearing per cubic {micro}m even after nanosecond-scale incubation times), in agreement with atomistic simulation and consistent with an extremely low nucleation barrier. We developed data reduction techniques based on principal component analysis (PCA), revealing the complex, multi-dimensional evolution of the material while suppressing noise and irrelevant information. Using a novel specimen geometry, we also achieved repeated switching between the amorphous and crystalline states.
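
A minimal sketch of the PCA-style data reduction described above, applied to a stack of time-resolved frames; the array shapes, function name, and random test data are illustrative assumptions, not details of the actual DTEM pipeline:

    import numpy as np

    def pca_reduce(frames, n_components=3):
        """Reduce a (n_frames, n_pixels) stack to its leading components."""
        X = frames - frames.mean(axis=0)        # center each pixel over time
        # SVD of the centered stack yields the principal components directly
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        scores = U[:, :n_components] * S[:n_components]  # temporal evolution
        components = Vt[:n_components]                   # spatial patterns
        return scores, components

    # Example: 200 frames of 64x64 detector images, flattened per frame
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 64 * 64))
    scores, components = pca_reduce(frames)
    print(scores.shape, components.shape)       # (200, 3) (3, 4096)

Projecting each frame onto a few leading components compresses the series while retaining the dominant correlated changes, which is what lets the multi-dimensional evolution stand out above the noise.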

  6. Translational Genomics for the Improvement of Switchgrass

    SciTech Connect (OSTI)

    Carpita, Nicholas; McCann, Maureen

    2014-05-07

    Our objectives were to apply bioinformatics and high throughput sequencing technologies to identify and classify the genes involved in cell wall formation in maize and switchgrass. Targets for genetic modification were to be identified and cell wall materials isolated and assayed for enhanced performance in bioprocessing. We annotated and assembled over 750 maize genes into gene families predicted to function in cell wall biogenesis. Comparative genomics of maize, rice, and Arabidopsis sequences revealed differences in gene family structure. In addition, differences in expression between gene family members of Arabidopsis, maize and rice underscored the need for a grass-specific genetic model for functional analyses. A forward screen of mature leaves of field-grown maize lines by near-infrared spectroscopy yielded several dozen lines with heritable spectroscopic phenotypes; several of these near-infrared (nir) mutants had altered carbohydrate-lignin compositions. Our contributions to the maize genome sequencing effort built on knowledge of copy number variation showing that uneven gene losses between duplicated regions were involved in returning an ancient allotetraploid to a genetically diploid state. For example, although about 25% of all duplicated genes remain genome-wide, all of the cellulose synthase (CesA) homologs were retained. We showed that guaiacyl and syringyl lignin in lignocellulosic cell-wall materials from stems demonstrate a two-fold natural variation in content across a population of maize Intermated B73 x Mo17 (IBM) recombinant inbred lines, a maize Association Panel of 282 inbreds and landraces, and three populations of the maize Nested Association Mapping (NAM) recombinant inbred lines grown over three years. We then defined quantitative trait loci (QTL) for stem lignin content measured using pyrolysis molecular-beam mass spectrometry, and glucose and xylose yield measured using an enzymatic hydrolysis assay. Among five multi-year QTL for lignin abundance, two for 4-vinylphenol abundance, and four for glucose and/or xylose yield, not a single QTL for aromatic abundance and sugar yield was shared. A genome-wide association study (GWAS) for lignin abundance and sugar yield of the 282-member maize Association Panel provided candidate genes in the eleven QTL and showed that many other alleles impacting these traits exist in the broader pool of maize genetic diversity. The maize B73 and Mo17 genotypes exhibited surprisingly large differences in gene expression in developing stem tissues, suggesting certain regulatory elements can significantly enhance activity of biomass synthesis pathways. Candidate genes, identified by GWAS or by differential expression, include genes of cell-wall metabolism, transcription factors associated with vascularization and fiber formation, and components of cellular signaling pathways. Our work provides new insights and strategies beyond modification of lignin to enhance yields of biofuels from genetically tailored biomass.

  7. A Measurement Management Technology for Improving Energy Efficiency in Data Centers and Telecommunication Facilities

    SciTech Connect (OSTI)

    Hendrik Hamann, Levente Klein

    2012-06-28

    Data center (DC) electricity use is increasing at an annual rate of over 20% and presents a concern for the Information Technology (IT) industry, governments, and society. A large fraction of the energy use is consumed by compressor cooling to maintain the recommended operating conditions for IT equipment. The most common way to improve DC efficiency is to optimally provision the cooling power to match the global heat dissipation in the DC. However, at a more granular level, the large range of heat densities of today's IT equipment makes the task of provisioning cooling power optimized to the level of individual computer room air conditioning (CRAC) units much more challenging. Distributed sensing within a DC enables the development of new strategies to improve energy efficiency, such as hot spot elimination through targeted cooling, matching power consumption at rack level with workload schedule, and minimizing power losses. The scope of Measurement and Management Technologies (MMT) is to develop a software tool and the underlying sensing technology to provide critical decision support and control for DC and telecommunication facilities (TF) operations. A key aspect of MMT technology is integration of modeling tools to understand how changes in one operational parameter affect the overall DC response. It is demonstrated that reduced-order models can generate, in less than 2 seconds of computational time, a three-dimensional thermal model of a 50 kft{sup 2} DC. This rapid modeling enables real-time visualization of DC conditions and enables 'what-if' scenario simulations to characterize response to 'disturbances'. One such example is thermal zone modeling that matches the cooling power to the heat generated at a local level by identifying DC zones cooled by a specific CRAC. Turning off a CRAC unit can be simulated to understand how the other CRAC utilization changes and how server temperature responds. Several new sensing technologies were added to the existing MMT platform: (1) air contamination (corrosion) sensors, (2) power monitoring, and (3) a wireless environmental sensing network. All three technologies are built on cost-effective sensing solutions that increase the density of sensing points and enable high resolution mapping of DCs. The wireless sensing solution enables Air Conditioning Unit (ACU) control, while the corrosion sensor enables air-side economization and can quantify the risk of IT equipment failure due to air contamination. Validation data for six test sites demonstrate that leveraging MMT energy efficiency solutions combined with industry best practices results in an average of 20% reduction in cooling energy, without major infrastructure upgrades. As an illustration of the unique MMT capabilities, a data center infrastructure efficiency (DCIE) of 87% (industry best operation) was achieved. The technology is commercialized through IBM System and Technology Lab Services, which offers MMT as a solution to improve DC energy efficiency. Estimates indicate that deploying MMT in existing DCs can result in savings of 8 billion kWh, and projections indicate that sustained adoption of MMT can yield obtainable savings of 44 billion kWh in 2035. Negotiations are under way with business partners to commercialize/license the ACU control technology and the new sensor solutions (corrosion and power sensing) to enable third party vendors and developers to leverage the energy efficiency solutions.
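
For reference, the DCIE figure quoted above is the ratio of IT equipment power to total facility power; a minimal sketch, with made-up wattages rather than measurements from the MMT test sites:

    def dcie(it_power_kw, total_facility_power_kw):
        """Data center infrastructure efficiency: IT power / total facility power."""
        return it_power_kw / total_facility_power_kw

    it_load = 870.0     # kW drawn by servers, storage, and network gear
    facility = 1000.0   # kW total, including cooling and distribution losses
    print(f"DCIE = {dcie(it_load, facility):.0%}")  # -> DCIE = 87%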

  8. Greenhouse Gas Mitigation Options in ISEEM Global Energy Model: 2010-2050 Scenario Analysis for Least-Cost Carbon Reduction in Iron and Steel Sector

    SciTech Connect (OSTI)

    Karali, Nihan; Xu, Tengfang; Sathaye, Jayant

    2013-12-01

    The goal of the modeling work carried out in this project was to quantify long-term scenarios for the future emission reduction potentials in the iron and steel sector. The main focus of the project is to examine the impacts of carbon reduction options in the U.S. iron and steel sector under a set of selected scenarios. In order to advance the understanding of carbon emission reduction potential on the national and global scales, and to evaluate the regional impacts of potential U.S. mitigation strategies (e.g., commodity and carbon trading), we also included and examined carbon reduction scenarios in China’s and India’s iron and steel sectors in this project. For this purpose, a new bottom-up energy modeling framework, the Industrial Sector Energy Efficiency Modeling (ISEEM) framework (Karali et al. 2012), was used to provide detailed annual projections from 2010 through 2050. We used the ISEEM modeling framework to carry out detailed analyses, on a country-by-country basis, for the U.S., China’s, and India’s iron and steel sectors. The ISEEM model variant for the iron and steel sector, called ISEEM-IS, was developed to estimate and evaluate carbon emissions scenarios under several alternative mitigation options - including policies (e.g., carbon caps), commodity trading, and carbon trading. The projections will help us to better understand emission reduction potentials and their technological and economic implications. The ISEEM-IS input database consists of data and information compiled from various resources such as the World Steel Association (WSA), the U.S. Geological Survey (USGS), China Steel Year Books, the India Bureau of Mines (IBM), the Energy Information Administration (EIA), and recent LBNL studies on bottom-up techno-economic analysis of energy efficiency measures in the iron and steel sector of the U.S., China, and India, including long-term steel production in China. In the ISEEM-IS model, production technology and manufacturing details are represented, in addition to the extensive data compiled from recent studies on bottom-up representation of efficiency measures for the sector. We also defined various mitigation scenarios, including long-term production trends, to project country-specific production, energy use, trading, carbon emissions, and costs of mitigation. Such analyses can provide useful information to assist policy-makers when considering and shaping future emissions mitigation strategies and policies. The technical objective is to analyze the costs of production and CO{sub 2} emission reduction in the U.S., China’s, and India’s iron and steel sectors under different emission reduction scenarios, using ISEEM-IS as a cost optimization model. The scenarios included in this project correspond to various CO{sub 2} emission reduction targets for the iron and steel sector under different strategies such as simple CO{sub 2} emission caps (e.g., specific reduction goals), emission reduction via commodity trading, and emission reduction via carbon trading.
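
A minimal sketch of the kind of least-cost optimization ISEEM-IS performs, reduced here to a two-route linear program under a CO{sub 2} cap using scipy; the routes, costs, emission factors, demand, and cap are invented for illustration and are not values from the model's database:

    from scipy.optimize import linprog

    cost = [380.0, 450.0]   # $/tonne for two hypothetical production routes
    emis = [2.0, 0.6]       # tCO2 per tonne of steel for each route
    demand = 100.0          # Mt of steel that must be produced
    cap = 120.0             # MtCO2 emission cap

    res = linprog(
        c=cost,                               # minimize total production cost
        A_ub=[emis], b_ub=[cap],              # emissions must stay under the cap
        A_eq=[[1.0, 1.0]], b_eq=[demand],     # production must meet demand
        bounds=[(0, None), (0, None)],
    )
    print(res.x, res.fun)   # optimal tonnage per route and minimum cost

Tightening the cap shifts tonnage toward the cleaner but costlier route, which is exactly the cost-versus-reduction trade-off the scenarios above explore at far greater detail.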

  9. Recovery Act: Integrated DC-DC Conversion for Energy-Efficient Multicore Processors

    SciTech Connect (OSTI)

    Shepard, Kenneth L

    2013-03-31

    In this project, we have developed the use of thin-film magnetic materials to improve the energy efficiency of digital computing applications by enabling integrated dc-dc power conversion and management with on-chip power inductors. Integrated voltage regulation also enables fine-grained power management by providing dynamic scaling of the supply voltage in concert with the clock frequency of synchronous logic to throttle power consumption during periods of low computational demand. The voltage converter generates lower output voltages during periods of low computational performance requirements and higher output voltages during periods of high computational performance requirements. Implementation of integrated power conversion requires high-capacity energy storage devices, which are generally not available in traditional semiconductor processes. We achieve this with integration of thin-film magnetic materials into a conventional complementary metal-oxide-semiconductor (CMOS) process for high-quality on-chip power inductors. This project includes a body of work conducted to develop integrated switch-mode voltage regulators with thin-film magnetic power inductors. Soft-magnetic materials and inductor topologies are selected and optimized with intent to maximize efficiency and current density of the integrated regulators. A custom integrated circuit (IC) is designed and fabricated in 45-nm CMOS silicon-on-insulator (SOI) to provide the control system and power-train necessary to drive the power inductors, in addition to providing a digital load for the converter. A silicon interposer is designed and fabricated in collaboration with IBM Research to integrate custom power inductors by chip stacking with the 45-nm CMOS integrated circuit, enabling power conversion with current density greater than 10 A/mm{sup 2}. The concepts and designs developed from this work enable significant improvements in performance-per-watt of future microprocessors in servers, desktops, and mobile devices. These new approaches to scaled voltage regulation for computing devices also promise significant impact on electricity consumption in the United States and abroad by improving the efficiency of all computational platforms. In 2006, servers and datacenters in the United States consumed an estimated 61 billion kWh, or about 1.5% of the nation's total energy consumption. Federal Government servers and data centers alone accounted for about 10 billion kWh, for a total annual energy cost of about $450 million. Based upon market growth and efficiency trends, estimates place current server and datacenter power consumption at nearly 85 billion kWh in the US and at almost 280 billion kWh worldwide. Similar estimates place national desktop, mobile and portable computing at 80 billion kWh combined. While national electricity utilization for computation amounts to only 4% of current usage, it is growing at a rate of about 10% a year, with volume servers representing one of the largest growth segments due to the increasing utilization of cloud-based services. The percentage of power that is consumed by the processor in a server varies but can be as much as 30% of the total power utilization, with an additional 50% associated with heat removal. The approaches considered here should allow energy efficiency gains as high as 30% in processors for all computing platforms, from high-end servers to smart phones, resulting in a direct annual energy savings of almost 15 billion kWh nationally, and 50 billion kWh globally.
The work developed here is being commercialized by the start-up venture, Ferric Semiconductor, which has already secured two Phase I SBIR grants to bring these technologies to the marketplace.
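
As a rough companion to the description above, this sketch computes an ideal buck-converter duty cycle, the ripple current in a small thin-film inductor, and the dynamic-power scaling that supply-voltage throttling exploits; every component value is an illustrative assumption, not a parameter of the fabricated 45-nm design:

    def buck_duty_cycle(v_in, v_out):
        """Ideal buck converter duty cycle, D = Vout / Vin."""
        return v_out / v_in

    def inductor_ripple(v_out, d, l_henry, f_sw):
        """Peak-to-peak inductor current ripple of an ideal buck."""
        return v_out * (1.0 - d) / (l_henry * f_sw)

    def dynamic_power(c_farad, v, f):
        """Switching power of synchronous logic, P = C * V^2 * f."""
        return c_farad * v * v * f

    d = buck_duty_cycle(1.8, 0.9)                   # D = 0.5
    ripple = inductor_ripple(0.9, d, 10e-9, 100e6)  # 10 nH inductor at 100 MHz
    p_full = dynamic_power(1e-9, 1.0, 2e9)          # full speed: 1.0 V, 2 GHz
    p_slow = dynamic_power(1e-9, 0.7, 1e9)          # throttled: 0.7 V, 1 GHz
    print(d, ripple)            # 0.5, 0.45 A peak-to-peak
    print(p_slow / p_full)      # ~0.25: quadratic-in-V plus linear-in-f savings

Because switching power scales with V{sup 2}f, lowering voltage and frequency together during idle periods cuts dynamic power far faster than frequency scaling alone, which is why fine-grained integrated regulation pays off.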

  10. Final Report: Phase II Nevada Water Resources Data, Modeling, and Visualization (DMV) Center

    SciTech Connect (OSTI)

    Jackman, Thomas; Minor, Timothy; Pohll, Gregory

    2013-07-22

    Water is unquestionably a critical resource throughout the United States. In the semi-arid west -- an area stressed by increases in human population and sprawl of the built environment -- water is the most important limiting resource. Crucially, science must understand factors that affect availability and distribution of water. To sustain growing consumptive demand, science needs to translate understanding into reliable and robust predictions of availability under weather conditions that could be average but might be extreme. These predictions are needed to support current and long-term planning. Similar to the role of weather forecasting and climate prediction, water prediction over short and long temporal scales can contribute to resource strategy, governmental policy and municipal infrastructure decisions, which are arguably tied to the natural variability and unnatural change to climate. Change in seasonal and annual temperature, precipitation, snowmelt, and runoff affects the distribution of water over large temporal and spatial scales, which impacts the risk of flooding and groundwater recharge. Anthropogenic influences and impacts increase the complexity and urgency of the challenge. The goal of this project has been to develop a decision support framework of data acquisition, digital modeling, and 3D visualization. This integrated framework consists of tools for compiling, discovering and projecting our understanding of processes that control the availability and distribution of water. The framework is intended to support the analysis of the complex interactions between processes that affect water supply, from controlled availability to either scarcity or deluge. The developed framework enables DRI to promote excellence in water resource management, particularly within the Lake Tahoe basin. In principle, this framework could be replicated for other watersheds throughout the United States. Phase II of this project builds upon the research conducted during Phase I, in which the hydrologic framework was investigated and the development initiated. Phase II concentrates on practical implementation of the earlier work but emphasizes applications to the hydrology of the Lake Tahoe basin. Phase I efforts have been refined and extended by creating a toolset for geographic information systems (GIS) that is usable for disparate types of geospatial and geo-referenced data. The toolset is intended to serve multiple users for a variety of applications. The web portal for internet access to hydrologic and remotely sensed product data, prototyped in Phase I, has been significantly enhanced. The portal provides high performance access to LANDSAT-derived data using techniques developed during the course of the project. The portal is interactive, and supports the geo-referenced display of hydrologic information derived from remotely sensed data, such as various vegetative indices used to calculate water consumption. The platform can serve both internal and external constituencies using inter-operating infrastructure that spans both sides of the DRI firewall. The platform is intended to grow its supported data assets and to serve as a template for replication to other geographic areas. An unanticipated development during the project was the use of ArcGIS software on a new computer system, called IBM PureSystems, and the parallel use of the system for faster, more efficient image processing.
Additional data, independent of the portal, were collected within the Sagehen basin and provide detailed information regarding the processes that control hydrologic responses within mountain watersheds. The newly collected data include elevation, evapotranspiration, energy balance and remotely sensed snow-pack data. A Lake Tahoe basin hydrologic model has been developed, in part to help predict the hydrologic impacts of climate change. The model couples the surface and subsurface hydrology, with the two components having been independently calibrated. Results from the coupled simulations involving both surface water and groundwater processes show that it is possible to fairly accurately simulate lake effects and water budget variables over a wide range of dry and wet cycles in the historical record. The Lake Tahoe basin is representative of the hydrology, topography and climate throughout the Sierra Nevada Range, and the entire model development is prototypical of the efforts required to replicate the decision support framework to other locales. The Lake Tahoe model in particular could allow water managers to evaluate more accurately components of the water budget (ET, runoff, groundwater, etc.) and to answer important questions regarding water resources in northern Nevada. This report discusses the geographic scale and the hydrologic complexity of the calibrated model developed as part of this project, as well as simulation results for historical and future climate projections. To enable human-driven data exploration and discovery, new software has been developed for a globalized rendering module that extends the capability of our evolving custom visualization engine from Phase I (called SMEngine). The new rendering component, called Horizon, supports terrain rendering capable of displaying and interrogating both remotely sensed and modeled data. The development of Horizon necessitated adaptation of the visualization engine to allow extensible integration of components such as the global rendering module and support for associated features. The resulting software is general in its GIS capability, but a specific Lake Tahoe visualization application suitable for immersive decision support in the DRIVE6 virtual reality facility has been developed. During the development, various features to enhance the value of the visualization experience were explored, including the use of hyperspectral image overlays. An over-arching goal of the visualization aspect of the project has been to develop and demonstrate the CAVE (CAVE Automatic Virtual Environment) as a practical tool for hydrologic research.
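
At its simplest, the water-budget bookkeeping such a model performs reduces to conservation of mass per time step; a minimal lumped sketch with invented monthly fluxes, far below the fidelity of the coupled Tahoe model:

    precip = [6.2, 5.8, 4.9, 2.1, 0.8, 0.3]  # monthly precipitation, cm
    et     = [1.0, 1.4, 2.2, 3.5, 4.0, 3.8]  # evapotranspiration, cm
    runoff = [2.0, 2.2, 2.5, 1.0, 0.4, 0.1]  # streamflow out of the basin, cm

    storage = [0.0]  # cumulative change in basin storage, cm
    for p, e, r in zip(precip, et, runoff):
        # conservation of mass each step: dS = P - ET - Q
        storage.append(storage[-1] + (p - e - r))
    print(storage[1:])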

  11. The CO-OP Guide

    SciTech Connect (OSTI)

    Michael, J.; /Fermilab

    1991-08-16

    You are at D0, the newest and most advanced experiment at Fermilab. Its goal is to find the 'top quark', nicknamed 'truth', theoretically one of the six fundamental building blocks of matter. Combinations of the six quarks are said to make up particles such as protons and neutrons. Your group at D0 is the cryogenic division. Its goal is to provide and maintain a cryogenic system which ultimately supplies and controls the liquid argon used in the giant cryostats for the experiment. The high-purity liquid argon is needed to keep the detector modules inside the cryostats cold, so that they will operate properly. Your job at D0 is to be a co-op for the research and development group of the cryogenics division. Your goals are dependent on the needs of the cryo group. D0 is where you will spend most of your time. The co-op office is located on what is known as the 3rd floor, but is actually on the ground floor. The floor directly above the 3rd floor is the 5th floor, which contains your immediate superiors and the D0 secretary. The 6th and top floor is above that, and contains the D0 secretary for official and important business. On the other side of the D0 assembly building is the cryo control room. This is where the cryogenic piping system is remotely monitored and controlled. Other important sites at D0 include the trailer city on the north parking lot, which has the D0 secretary who handles all the payroll matters (among other duties), and the portakamp in the south parking lot. Besides D0, which is named for its location on the particle accelerator ring, the most important place is Wilson Hall. That is the large building shaped like a big Atact symbol. It contains various important people and offices such as the safety group, the personnel department (which you have already encountered, being hired), the minor stock room, the cafeteria, the Fermi library, Ramsey Auditorium, etc. Behind Wilson Hall is the Booster Ring, which accelerates particles before they are injected into the main ring. Inside the booster ring are the East and West Booster towers, which contain cryogenic support groups. The D0 cryo group offices used to be in the West Booster Portakamps. Away from Wilson Hall, there are various buildings strewn about the Fermilab property that have important functional uses to D0. One such example is Lab A. This is where the now unused bubble chamber resides, which was used to take pictures of particle motion. Many of our group are from the bubble chamber, and occasionally stories from the 'bubble chamber days' can be heard as someone waxes nostalgic. Lab A has a machine shop and many technicians. All three of the cryostats used in the D0 experiment went through Lab A for preparation and installation work. Lab A is located directly up the road from the front of Wilson Hall (north-east). Its unmistakable dark geodesic dome makes it easy to find. The Feynman Computer building, located east and just a little bit north of Wilson Hall, houses the computer repair people. If any of the computers used in our group crash and burn, we must take them to the third floor of Feynman to be fixed or exchanged. On one side is the Prep department, which handles the VAX mainframe computers, and on the other is personal computer repair, which handles Fermi Macs and IBMs. Directly north of Wilson Hall is Site 38. This site is the location of many important Fermilab facilities, such as the Fermi fire department, the carpenter's shop, the Fermi gas pumps, the main stock room, and shipping and receiving.
Lastly, but perhaps most significantly, is the Fermilab Village. In addition to the machine shops, the cut shop, welding facilities, and the garishly painted physicist dorms, there are such things as a gym, a pool and other facilities to take the edge off a weary mind. The village is located just north off Batavia road on the east side of Fermilab. The village barn is the first and most notable building as one approaches.

  12. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    SciTech Connect (OSTI)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. This method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation performed on each processor. Graph partitioning is a well known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communications and balance the load imposed on the processors. Our goals in this work are two-fold: (1) To develop a new scalable graph partitioning method with good load balancing and communication reduction capability.
(2) To study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to conventional brick laying.
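
The brick idea lends itself to a compact illustration. The sketch below assigns nodes of a 2D structured mesh to fixed-size bricks, staggering alternate rows the way running-bond brick laying staggers joints; the brick dimensions and mesh size are illustrative assumptions, not values from the paper:

    def brick_id(x, y, brick_w=4, brick_h=2, bricks_per_row=8):
        """Map integer node coordinates to the ID of their containing brick."""
        row = y // brick_h
        # stagger odd rows by half a brick, as in a running-bond wall
        offset = brick_w // 2 if row % 2 else 0
        col = (x + offset) // brick_w
        return row * bricks_per_row + col

    # Coarsen a 16x8 structured mesh: each node maps to a mega-vertex (brick)
    coarse = {(x, y): brick_id(x, y) for x in range(16) for y in range(8)}
    print(len(set(coarse.values())), "bricks for", len(coarse), "nodes")

A partitioner such as RCB or PT-Scotch then runs on the much smaller graph of bricks (mega-vertices and mega-edges), and the resulting partition is mapped back to the original mesh nodes.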