National Library of Energy BETA

Sample records for abaqus computer program

  1. Visualizing MCNP Tally Segment Geometry and Coupling Results with ABAQUS

    SciTech Connect (OSTI)

    J. R. Parry; J. A. Galbraith

    2007-11-01

    The Advanced Graphite Creep test, AGC-1, is planned for irradiation in the Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant program. The experiment requires very detailed neutronics and thermal hydraulics analyses to show compliance with programmatic and ATR safety requirements. The MCNP model used for the neutronics analysis required hundreds of tally regions to provide the desired detail. A method for visualizing the hundreds of tally region geometries and the tally region results in 3 dimensions has been created to support the AGC-1 irradiation. Additionally, a method was created which would allow ABAQUS to access the results directly for the thermal analysis of the AGC-1 experiment.

  2. Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam

    SciTech Connect (OSTI)

    Siranosian, Antranik A.; Stevens, R. Robert

    2012-04-26

    This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data, from acceptance tests for 'Pad A' conducted at the Kansas City Plant (KCP), was used to calibrate the model. The data includes gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine one set of data, representing a qualitative mean, was chosen to calibrate the model. The data was converted from gap and load to nominal (engineering) strain and nominal stress in order to implement it in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
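
    The calibration described above reduces to simple arithmetic: nominal strain is the measured gap divided by the initial pad thickness, and nominal stress is the measured load divided by the load-bearing area. A minimal sketch of that conversion follows; the thickness, area, and data values are placeholders for illustration, not numbers from the report.

        # Sketch: convert gap (relative displacement) and load measurements into
        # nominal (engineering) strain and stress for an Abaqus *HYPERFOAM calibration.
        # All numerical values below are made up for illustration.

        def gap_load_to_nominal(gaps_mm, loads_N, thickness_mm, area_mm2):
            """Return (nominal_strains, nominal_stresses_MPa).

            Nominal strain = gap / initial pad thickness.
            Nominal stress = load / initial load-bearing area (N/mm^2 == MPa).
            """
            strains = [g / thickness_mm for g in gaps_mm]
            stresses = [f / area_mm2 for f in loads_N]
            return strains, stresses

        # Example with placeholder numbers only:
        strains, stresses = gap_load_to_nominal(
            gaps_mm=[0.0, 0.1, 0.2, 0.3],
            loads_N=[0.0, 40.0, 95.0, 170.0],
            thickness_mm=2.5,   # assumed initial pad thickness
            area_mm2=500.0,     # assumed load-bearing area
        )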

  3. Programs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Featured Science: Simulation of cosmic reionization (Cosmic Reionization On Computers, Nickolay Gnedin). Allocation Program: INCITE. Allocation Hours: 65 Million. As a DOE Office of Science User Facility dedicated to open science, any ...

  4. Enhancing the ABAQUS thermomechanics code to simulate multipellet steady and transient LWR fuel rod behavior

    SciTech Connect (OSTI)

    R. L. Williamson

    2011-08-01

    A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully-coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.
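
    One of the enhancements listed is gap heat transfer between the fuel and the cladding. As a purely illustrative simplification (not the correlation implemented in this work), gap conductance is often approximated as a gas-conduction term plus a radiation term between the two surfaces; a sketch follows, with all property values assumed.

        # Illustrative, simplified fuel-cladding gap conductance:
        #   h_gap ~ k_gas / gap_width  +  radiation between two gray surfaces.
        # This is a textbook-style approximation, not the model from the report.

        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

        def gap_conductance(k_gas, gap_width, t_fuel, t_clad, eps_fuel=0.8, eps_clad=0.3):
            """Approximate gap conductance in W/(m^2 K); emissivities are assumed values."""
            h_gas = k_gas / gap_width
            eps_eff = 1.0 / (1.0 / eps_fuel + 1.0 / eps_clad - 1.0)
            h_rad = SIGMA * eps_eff * (t_fuel**2 + t_clad**2) * (t_fuel + t_clad)
            return h_gas + h_rad

        # Example: helium-like gas conductivity, 50-micron gap, assumed surface temperatures (K).
        print(gap_conductance(k_gas=0.3, gap_width=50e-6, t_fuel=900.0, t_clad=600.0))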

  5. Enhancing the ABAQUS Thermomechanics Code to Simulate Steady and Transient Fuel Rod Behavior

    SciTech Connect (OSTI)

    R. L. Williamson; D. A. Knoll

    2009-09-01

    A powerful multidimensional fuels performance capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. The various modeling capabilities are demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multi-pellet fuel rod, during both steady and transient operation. Computational results demonstrate the importance of a multidimensional fully-coupled thermomechanics treatment. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermo-mechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  6. Advanced Simulation and Computing Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Simulation and Computing (ASC) Program. Image captions: unstable intermixing of heavy (sulfur hexafluoride) and light fluid (air); turbulence generated by unstable fluid flow; examining the effects of a one-megaton nuclear energy source detonated on the surface of an asteroid. Los Alamos National Laboratory is home to two of the world's most powerful supercomputers, each capable of performing more than 1,000 trillion operations per second. The newer one, Cielo, was ...

  7. INCITE Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    INCITE Program: Innovative and Novel Computational Impact on Theory and Experiment. The INCITE program provides allocations to ...

  8. Parallel Programming with MPI | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Programming with MPI. Event Sponsor: Mathematics and Computer Science Division. The Mathematics and Computer Science division of ...

  9. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center (OSTI)

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from the ground surface and plume gamma pathways is calculated. New updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.
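
    The downwind dispersion step in this kind of analysis is commonly handled with a Gaussian plume model. The sketch below shows a ground-reflected Gaussian plume calculation as a generic illustration; the dispersion coefficients and release parameters are placeholders and are not taken from RSAC.

        # Generic ground-reflected Gaussian plume concentration, for illustration only.
        import math

        def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
            """Air concentration (release units per m^3) at a downwind receptor.

            Q: release rate, u: wind speed (m/s), sigma_y/sigma_z: dispersion
            coefficients (m) at the downwind distance of interest, y: crosswind
            offset (m), z: receptor height (m), H: effective release height (m).
            """
            lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                        + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
            return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Example with placeholder values: unit release rate, 3 m/s wind, ground-level receptor.
        chi = plume_concentration(Q=1.0, u=3.0, sigma_y=80.0, sigma_z=40.0, y=0.0, z=0.0, H=10.0)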

  10. Director's Discretionary (DD) Program | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The ALCF's DD program provides "start up" awards to researchers working toward an INCITE or ALCC allocation to help them achieve computational readiness. Projects ...

  11. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
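
    The core idea is easy to sketch: collect one calling-instruction address per thread, bucket the threads by address, and display the buckets so that small, unusual groups (often the hung or defective threads) stand out. The sketch below uses a made-up data layout and is not the patented implementation.

        from collections import defaultdict

        def group_threads_by_call_address(thread_addresses):
            """Group thread ids by the calling-instruction address each thread reported.

            thread_addresses: {thread_id: address}. Returns (address, [thread_ids])
            pairs sorted so the smallest, most suspicious groups come first.
            """
            groups = defaultdict(list)
            for tid, addr in thread_addresses.items():
                groups[addr].append(tid)
            return sorted(groups.items(), key=lambda item: len(item[1]))

        # Example: six threads, one stopped at a different address than the rest.
        sample = {0: 0x4005F0, 1: 0x4005F0, 2: 0x4005F0, 3: 0x4007A2, 4: 0x4005F0, 5: 0x4005F0}
        for address, thread_ids in group_threads_by_call_address(sample):
            print(hex(address), thread_ids)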

  12. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  13. ALCC Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ASCR Leadership Computing Challenge (ALCC) Program: The ALCC program allocates resources to projects with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission and for broadening the community of researchers capable of using leadership computing resources. The DOE conducts a peer review of all ...

  14. Calibrating the Abaqus Crushable Foam Material Model using UNM Data

    SciTech Connect (OSTI)

    Schembri, Philip E.; Lewis, Matthew W.

    2014-02-27

    Triaxial test data from the University of New Mexico and uniaxial test data from W-14 is used to calibrate the Abaqus crushable foam material model to represent the syntactic foam comprised of APO-BMI matrix and carbon microballoons used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and not-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
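
    The elastic fit mentioned above amounts to a straight-line regression over the initial portion of each stress-strain curve. A minimal sketch of that step is shown below; the strain cutoff and the data points are assumptions for illustration, not values from the UNM or W-14 tests.

        import numpy as np

        def fit_elastic_region(strain, stress, elastic_strain_limit=0.01):
            """Fit a line to the assumed-elastic start of a stress-strain curve.

            Returns (modulus_estimate, intercept). The 1% strain cutoff is an
            arbitrary placeholder, not the report's criterion.
            """
            strain = np.asarray(strain)
            stress = np.asarray(stress)
            mask = strain <= elastic_strain_limit
            modulus, intercept = np.polyfit(strain[mask], stress[mask], deg=1)
            return modulus, intercept

        # Example with made-up data points (strain is dimensionless, stress in MPa):
        E_est, b = fit_elastic_region([0.0, 0.005, 0.01, 0.05], [0.0, 0.5, 1.0, 1.8])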

  15. ADP computer security classification program

    SciTech Connect (OSTI)

    Augustson, S.J.

    1984-01-01

    CG-ADP-1, the Automatic Data Processing Security Classification Guide, provides for classification guidance (for security information) concerning the protection of Department of Energy (DOE) and DOE contractor Automatic Data Processing (ADP) systems which handle classified information. Within the DOE, ADP facilities that process classified information provide potentially lucrative targets for compromise. In conjunction with the security measures required by DOE regulations, necessary precautions must be taken to protect details of those ADP security measures which could aid in their own subversion. Accordingly, the basic principle underlying ADP security classification policy is to protect information which could be of significant assistance in gaining unauthorized access to classified information being processed at an ADP facility. Given this policy, classification topics and guidelines are approved for implementation. The basic program guide, CG-ADP-1 is broad in scope and based upon it, more detailed local guides are sometimes developed and approved for specific sites. Classification topics are provided for system features, system and security management, and passwords. Site-specific topics can be addressed in local guides if needed.

  16. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities ...

  17. 2014 call for NERSC's Data Intensive Computing Pilot Program...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2014 call for NERSC's Data Intensive Computing Pilot Program, due December 10. Posted November 18, 2013, by Francesca Verdier ...

  18. Refurbishment program of HANARO control computer system

    SciTech Connect (OSTI)

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S. [Korea Atomic Energy Research Inst., 989-111 Daedeok-daero, Yuseong, Daejeon, 305-353 (Korea, Republic of)

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts supply could have caused great problems. The first consideration for a replacement of the control computer dates back to 2007. The supplier did not produce the components of MLC so that this system would no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system in consideration of suitable interfaces, compliance with no special outage for installation and commissioning, and no change of the well-proved operation philosophy. HCCS is a DCS (Discrete Control System) using PLC manufactured by RTP. To enhance the reliability, we adapt a triple processor system, double I/O system and hot swapping function. This paper describes the refurbishment program of the HANARO control system including the design requirements of HCCS. (authors)

  19. Computer System, Cluster, and Networking Summer Institute Program Description

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Computer System, Cluster, and Networking Summer Institute (CSCNSI) is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. The program emphasizes practical skill development in setting up, configuring, administering, testing, monitoring, and scheduling computer systems, supercomputer clusters, and computer ...

  20. ALCF Data Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ALCF Data Science Program The ALCF Data Science Program (ADSP) is targeted at "big data" science problems that require the scale and performance of leadership computing resources. ...

  1. Seventy Years of Computing in the Nuclear Weapons Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Seventy Years of Computing in the Nuclear Weapons Program. WHEN: Jan 13, 2015, 7:30 PM - 8:00 PM. WHERE: Fuller Lodge, Central ...

  2. Intro to computer programming, no computer required! | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

  3. Early Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Early Science Program: As part of the process of bringing a new supercomputer into production, the ALCF hosts the ...

  4. Argonne Training Program on Extreme-Scale Computing Scheduled...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This program provides intensive hands-on training on the key skills, approaches and tools to design, implement, and execute computational science and engineering applications on ...

  5. Computer System, Cluster, and Networking Summer Institute Program...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Computer System, Cluster, and Networking Summer Institute (CSCNSI) is a focused technical enrichment ...

  6. Mira Early Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC architectures. Together, the 16 projects span a diverse range of scientific fields, numerical methods, programming models, and computational approaches. The latter include...

  7. Method and computer program product for maintenance and modernization backlogging

    DOE Patents [OSTI]

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
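
    The arithmetic claimed in the summary is a straight sum of three per-period quantities. A one-function sketch follows; the function and argument names are mine, not the patent's.

        def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
            """Future facility condition for one time period: the sum of the three terms."""
            return maintenance_cost + modernization_factor + backlog_factor

        # Example with placeholder figures for a single time period:
        print(future_facility_condition(120_000.0, 45_000.0, 30_000.0))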

  8. Data Intensive Computing Pilot Program 2012/2013 Awards

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Intensive Computing Pilot Program 2012/2013 Awards. NERSC's new data-intensive science pilot program is aimed at helping scientists capture, analyze and store the increasing stream of scientific data coming out of experiments, simulations and instruments. Projects in this program have been allocated for 2012 and 2013. High Throughput Computational Screening of Energy Materials, Gerbrand Ceder, Massachusetts Institute of Technology. NERSC Repository: matdat. NERSC Resources ...

  9. Seventy Years of Computing in the Nuclear Weapons Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Seventy Years of Computing in the Nuclear Weapons Program. WHEN: Jan 13, 2015, 7:30 PM - 8:00 PM. WHERE: Fuller Lodge, Central Avenue, Los Alamos, NM, USA. SPEAKER: Bill Archer of the Weapons Physics (ADX) Directorate. CONTACT: Bill Archer, 505 665 7235. CATEGORY: Science. Event Description: Rich history of computing in the Laboratory's weapons program. The talk is free and open to the public and is part of the 2014-15 Los ...

  10. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED). Presented by Lisa Anderson (BNI), US DOE NPH Workshop ...

  11. UFO (UnFold Operator) computer program abstract

    SciTech Connect (OSTI)

    Kissel, L.; Biggs, F.

    1982-11-01

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  12. ORISE Resources: Equal Access Initiative Computer Grants Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Equal Access Initiative Computer Grants Program The Equal Access Initiative Computer Grants Program is sponsored by the National Minority AIDS Council (NMAC) and the National Institutes of Health's Office of AIDS Research (OAR), to help community organizations build technological capacity with the goal of enhancing their ability to provide HIV/AIDS prevention and treatment information for their clients and communities. Each year, qualified faith-/community-based organizations in the United

  13. Computer programs for multilocus haplotyping of general pedigrees

    SciTech Connect (OSTI)

    Weeks, D.E.; O'Connell, J.R.; Sobel, E.

    1995-06-01

    We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.

  14. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  15. Computer Science Program | U.S. DOE Office of Science (SC)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science | Advanced Scientific Computing Research (ASCR): Exascale Tools Workshop, Programming Challenges Workshop ...

  16. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  17. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  18. Computer programs for eddy-current defect studies

    SciTech Connect (OSTI)

    Pate, J. R.; Dodd, C. V.

    1990-06-01

    Several computer programs to aid in the design of eddy-current tests and probes have been written. The programs, written in Fortran, deal in various ways with the response to defects exhibited by four types of probes: the pancake probe, the reflection probe, the circumferential boreside probe, and the circumferential encircling probe. Programs are included which calculate the impedance or voltage change in a coil due to a defect, which calculate and plot the defect sensitivity factor of a coil, and which invert calculated or experimental readings to obtain the size of a defect. The theory upon which the programs are based is the Burrows point defect theory, and thus the calculations of the programs will be more accurate for small defects. 6 refs., 21 figs.

  19. Application and implementation of transient algorithms in computer programs

    SciTech Connect (OSTI)

    Benson, D.J.

    1985-07-01

    This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.

  20. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  1. PET computer programs for use with the 88-inch cyclotron

    SciTech Connect (OSTI)

    Gough, R.A.; Chlosta, L.

    1981-06-01

    This report describes in detail several offline programs written for the PET computer which provide an efficient data management system to assist with the operation of the 88-Inch Cyclotron. This function includes the capability to predict settings for all cyclotron and beam line parameters for all beams within the present operating domain of the facility. The establishment of a data base for operational records is also described from which various aspects of the operating history can be projected.

  2. An Information Dependent Computer Program for Engine Exhaust Heat Recovery

    Broader source: Energy.gov (indexed) [DOE]

    ... for Heating | Department of Energy. A computer program was developed to help engineers at rural Alaskan village power plants to quickly evaluate how to use exhaust waste heat from individual diesel power plants. (Attached document: deer09_avadhanula.pdf)

  3. final report for Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Johnson, Ralph E

    2013-04-10

    This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.

  4. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect (OSTI)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  5. Viscosity index calculated by program in GW-basic for personal computers

    SciTech Connect (OSTI)

    Anaya, C.; Bermudez, O. )

    1988-12-26

    A computer program has been developed to calculate the viscosity index of oils when viscosities at two temperatures are known.
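
    For oils with a viscosity index up to 100, the standard ASTM D2270 relation is VI = 100 (L - U) / (L - H), where U is the oil's kinematic viscosity at 40 °C and L and H are the tabulated 40 °C viscosities of reference oils (VI = 0 and VI = 100) that have the same 100 °C viscosity as the test oil. A sketch of that calculation follows; the reference values L and H must come from the standard's tables or correlations, which are not reproduced here, and this sketch is not the GW-BASIC program itself.

        def viscosity_index(u_40C, l_ref, h_ref):
            """Basic viscosity index for VI <= 100 oils: VI = 100 * (L - U) / (L - H).

            u_40C: measured kinematic viscosity of the oil at 40 C (cSt).
            l_ref, h_ref: tabulated 40 C viscosities (cSt) of the VI = 0 and
            VI = 100 reference oils sharing the test oil's 100 C viscosity.
            """
            return 100.0 * (l_ref - u_40C) / (l_ref - h_ref)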

  6. Advanced Simulation and Computing and Institutional R&D Programs | National

    National Nuclear Security Administration (NNSA)

    The Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (DOE/NNSA) Defense Programs' use of simulation-based evaluation of the nation's nuclear weapons stockpile. The ASC Program is responsible for providing the simulation tools and computing environments required to qualify and certify the nation's nuclear ...

  7. 2014 call for NERSC's Data Intensive Computing Pilot Program Due December

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2014 call for NERSC's Data Intensive Computing Pilot Program, due December 10. Posted November 18, 2013, by Francesca Verdier. NERSC's Data Intensive Computing Pilot Program is now open for its second round of allocations to projects in data intensive science. This pilot aims to support and enable scientists to tackle their most demanding data intensive challenges. Selected projects will be piloting new methods and technologies targeting data ...

  8. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    with DOE's national labs to use the labs' high-performance computing (HPC) systems to upgrade their manufacturing processes and bring new clean energy technologies to market. ...

  9. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies | Department of Energy. March 18, 2016, 3:31pm. News release from Lawrence Livermore National Laboratory, March 17, 2016. LIVERMORE, Calif. - A new U.S. Department of Energy (DOE) program ...

  10. The ENERGY-10 design-tool computer program

    SciTech Connect (OSTI)

    Balcomb, J.D.; Crowder, R.S. III.

    1995-11-01

    ENERGY-10 is a PC-based building energy simulation program for smaller commercial and institutional buildings that is specifically designed to evaluate energy-efficient features in the very early stages of the architectural design process. Developed specifically as a design tool, the program makes it easy to evaluate the integration of daylighting, passive solar design, low-energy cooling, and energy-efficient equipment into high-performance buildings. The simulation engines perform whole-building energy analysis for 8760 hours per year including both daylighting and dynamic thermal calculations. The primary target audience for the program is building designers, especially architects, but also includes HVAC engineers, utility officials, and architecture and engineering students and professors.

  11. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect (OSTI)

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  12. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

    Broader source: Energy.gov [DOE]

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop October 25, 2011

  13. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect (OSTI)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  14. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy ...

  15. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  16. High performance computing and communications grand challenges program

    SciTech Connect (OSTI)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Base (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus, and streptococcal protein G, are known to bind to IgG, and both have an alpha + beta sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  17. IPM: A Post-MPI Programming Model | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IPM: A Post-MPI Programming Model Event Sponsor: Mathematics and Computer Science Division LANS Seminar Start Date: Apr 19 2016 - 3:00pm Building/Room: Building 240/Room 1406-1407 Location: Argonne National Laboratory Speaker(s): Barry Smith Junchao Zhang Speaker(s) Title: Computational Mathematicians, ANL-MCS Event Website: http://www.mcs.anl.gov/research/LANS/events/listn/ The MPI parallel programming model has been a very successful parallel programming model for over twenty years. Though

  18. A computer program to determine the specific power of prismatic-core reactors

    SciTech Connect (OSTI)

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  19. Example Program and Makefile for BG/Q | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of How to Compile and Link: Example Program and Makefile for BG/Q ...

  20. Methods, systems, and computer program products for network firewall policy optimization

    DOE Patents [OSTI]

    Fulp, Errin W.; Tarsa, Stephen J.

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
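
    The key constraint is that two rules may change relative order only if no packet can match both; otherwise first-match semantics would change. The sketch below reorders rules by non-increasing probability using adjacent swaps that respect that constraint. The rule representation and intersection test are deliberately simplistic and hypothetical, not the patented method.

        # Sketch: reorder firewall rules by non-increasing match probability while
        # preserving policy. Two adjacent rules may be swapped only if no packet can
        # match both (their match sets are disjoint), so first-match semantics are kept.

        def intersects(rule_a, rule_b):
            """True if some packet could match both rules (toy set-based match model)."""
            return bool(rule_a["matches"] & rule_b["matches"])

        def reorder_rules(rules):
            """Bubble higher-probability rules forward, but only across disjoint rules."""
            rules = list(rules)
            changed = True
            while changed:
                changed = False
                for i in range(len(rules) - 1):
                    first, second = rules[i], rules[i + 1]
                    if second["prob"] > first["prob"] and not intersects(first, second):
                        rules[i], rules[i + 1] = second, first
                        changed = True
            return rules

        # Example: toy rules with made-up match sets and observed match probabilities.
        policy = [
            {"id": 1, "matches": {"tcp:22"}, "prob": 0.05, "action": "accept"},
            {"id": 2, "matches": {"tcp:80"}, "prob": 0.70, "action": "accept"},
            {"id": 3, "matches": {"tcp:22", "tcp:80"}, "prob": 0.25, "action": "deny"},
        ]
        print([rule["id"] for rule in reorder_rules(policy)])  # [2, 1, 3]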

  1. Finite Volume Based Computer Program for Ground Source Heat Pump System

    SciTech Connect (OSTI)

    Menart, James A.

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled “Finite Volume Based Computer Program for Ground Source Heat Pump Systems.” The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The price paid for the three-dimensional detail is the large computational times required with GEO3D. The computational times required for GEO2D are reasonable, a few minutes for a 20 year simulation. For a similar simulation, GEO3D takes days of computational time. Because of the small simulation times with GEO2D, a number of attractive features have been added to it. GEO2D has a user friendly interface where inputs and outputs are all handled with GUI (graphical user interface) screens. These GUI screens make the program exceptionally easy to use. To make the program even easier to use, a number of standard input options for the most common GSHP situations are provided to the user. For the expert user, the option still exists to enter their own detailed information. To further help designers and GSHP customers make decisions about a GSHP heating and cooling system, cost estimates are made by the program. These cost estimates include a payback period graph to show the user where their GSHP system pays for itself. These GSHP simulation tools should be a benefit to the advancement of GSHP systems.
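
    Among the reported outputs is COP as a function of time. As a generic illustration of that bookkeeping (the definition below is the textbook one, not the programs' internal calculation), heating COP is the heat delivered to the building divided by the electrical input.

        def heating_cop(heat_delivered_W, electrical_input_W):
            """Heating coefficient of performance: useful heat out per unit of electrical power in."""
            return heat_delivered_W / electrical_input_W

        # Example: a heat pump delivering 10 kW of heat while drawing 2.5 kW of electricity.
        print(heating_cop(10_000.0, 2_500.0))  # 4.0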

  2. Recovery Act: Finite Volume Based Computer Program for Ground Source Heat Pump Systems

    SciTech Connect (OSTI)

    James A Menart, Professor

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled “Finite Volume Based Computer Program for Ground Source Heat Pump Systems.” The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The price paid for the three-dimensional detail is the large computational times required with GEO3D. The computational times required for GEO2D are reasonable, a few minutes for a 20 year simulation. For a similar simulation, GEO3D takes days of computational time. Because of the small simulation times with GEO2D, a number of attractive features have been added to it. GEO2D has a user friendly interface where inputs and outputs are all handled with GUI (graphical user interface) screens. These GUI screens make the program exceptionally easy to use. To make the program even easier to use, a number of standard input options for the most common GSHP situations are provided to the user. For the expert user, the option still exists to enter their own detailed information. To further help designers and GSHP customers make decisions about a GSHP heating and cooling system, cost estimates are made by the program. These cost estimates include a payback period graph to show the user where their GSHP system pays for itself. These GSHP simulation tools should be a benefit to the advancement of GSHP systems.

  3. SNOW: a digital computer program for the simulation of ion beam devices

    SciTech Connect (OSTI)

    Boers, J.E.

    1980-08-01

    A digital computer program, SNOW, has been developed for the simulation of dense ion beams. The program simulates the plasma expansion cup (but not the plasma source itself), the acceleration region, and a drift space with neutralization if desired. The ion beam is simulated by computing representative trajectories through the device. The potentials are simulated on a large rectangular matrix array which is solved by iterative techniques. Poisson's equation is solved at each point within the configuration using space-charge densities computed from the ion trajectories combined with background electron and/or ion distributions. The simulation methods are described in some detail along with examples of both axially-symmetric and rectangular beams. A detailed description of the input data is presented.
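
    The potential solution described above is an iterative relaxation of Poisson's equation on a rectangular mesh with a space-charge source term. The sketch below shows a generic Jacobi-style update for that kind of problem; the grid size, units, and boundary handling are placeholders and do not reflect SNOW's implementation.

        import numpy as np

        def jacobi_poisson(phi, rho_over_eps0, h, n_iter=500):
            """Relax phi toward a solution of  laplacian(phi) = -rho/eps0  on a uniform grid.

            phi: 2-D array with boundary values already set (only interior points update).
            rho_over_eps0: 2-D array of space-charge density divided by epsilon_0.
            h: uniform grid spacing.
            """
            phi = phi.copy()
            for _ in range(n_iter):
                phi[1:-1, 1:-1] = 0.25 * (
                    phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
                    + h * h * rho_over_eps0[1:-1, 1:-1]
                )
            return phi

        # Example: grounded rectangular box with a uniform space-charge block in the middle.
        phi0 = np.zeros((50, 50))
        rho = np.zeros((50, 50))
        rho[20:30, 20:30] = 1.0
        solution = jacobi_poisson(phi0, rho, h=1e-3)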

  4. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual. Revision 1

    SciTech Connect (OSTI)

    Wenzel, D.R.

    1994-02-01

    The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.

  5. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect (OSTI)

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will apply our programming model to more large scale applications. In particular, we plan to classify and formalize more high level abstractions and semantics which are relevant to high performance computing. We will also investigate better ways to allow language designers, library developers and programmers to communicate abstraction and semantics information with each other.

  6. Princeton graduate student Imène Goumiri creates computer program that

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab. By John Greenwald and Raphael Rosen, April 14, 2016. Imène Goumiri, a Princeton University graduate student, has worked with physicists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) to simulate a method for limiting instabilities that reduce the ...

  7. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience, sustainable ...

  8. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    SciTech Connect (OSTI)

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for buying and selling of electricity between electric utilities and qualifying facilities (QF) as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulation results are converted into a diversified utility production profile, which can be either gross production or net production; net production accounts for internal electricity usage by the QF. The program allows adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.
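
    To make the rate calculation concrete, here is a minimal sketch (not SERICPAC itself) of how hourly QF production might be derated for scheduled and forced outages, netted of internal usage, and weighted by hourly avoided costs to produce a technology-specific average annual rate. The function name, outage rates, and input values are all hypothetical.

        # Minimal sketch: derate hourly production for outages, net out internal
        # usage, and weight hourly avoided costs by the resulting production to
        # obtain an average annual avoided-cost rate.
        def average_annual_rate(gross_kwh, avoided_cost_per_kwh, internal_use_kwh,
                                forced_outage_rate=0.05, scheduled_outage_rate=0.03):
            """All inputs are equal-length hourly lists; rates are hypothetical."""
            availability = (1.0 - forced_outage_rate) * (1.0 - scheduled_outage_rate)
            net_kwh = [max(g * availability - u, 0.0)
                       for g, u in zip(gross_kwh, internal_use_kwh)]
            revenue = sum(e * c for e, c in zip(net_kwh, avoided_cost_per_kwh))
            energy = sum(net_kwh)
            return revenue / energy if energy > 0 else 0.0

        # Example: flat 100 kWh/h production, 2 kWh/h internal use, varying costs.
        hours = 8760
        rate = average_annual_rate([100.0] * hours,
                                   [0.05 + 0.01 * ((h // 24) % 2) for h in range(hours)],
                                   [2.0] * hours)
        print(f"average annual avoided-cost rate: ${rate:.4f}/kWh")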

  9. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect (OSTI)

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element data base suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
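
    As an illustration of the solver strategy named in the abstract (an inexact Newton method whose linear correction is obtained from a Krylov solver), the following sketch applies a truncated GMRES solve inside a Newton loop to a tiny nonlinear system standing in for a discretized residual. It is not MPSalsa or Aztec code, and the test system is hypothetical.

        # Minimal sketch of the inexact Newton / Krylov idea: solve F(u) = 0 where
        # the Newton correction J*du = -F is obtained from a few GMRES iterations
        # per Newton step rather than an exact factorization.
        import numpy as np
        from scipy.sparse.linalg import gmres

        def F(u):
            # Hypothetical small nonlinear system standing in for a discretized PDE.
            return np.array([u[0]**2 + u[1] - 3.0,
                             u[0] + u[1]**2 - 5.0])

        def J(u):
            return np.array([[2.0 * u[0], 1.0],
                             [1.0, 2.0 * u[1]]])

        u = np.array([1.0, 1.0])
        for k in range(20):
            r = F(u)
            if np.linalg.norm(r) < 1e-10:
                break
            # Inexact linear solve: limit the number of Krylov iterations.
            du, _ = gmres(J(u), -r, maxiter=5)
            u = u + du
        print("solution:", u, "residual:", np.linalg.norm(F(u)))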

  10. Method, systems, and computer program products for implementing function-parallel network firewall

    DOE Patents [OSTI]

    Fulp, Errin W.; Farley, Ryan J.

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
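
    The sketch below illustrates the core idea of the claim: one rule set is split across firewall nodes so that each node filters with only part of it, and the per-node results are combined while preserving first-match semantics. The rule format, packet fields, and combination policy are hypothetical simplifications, not the patented design.

        # Minimal sketch of function-parallel filtering: partition a rule set across
        # nodes, let each node match independently, and combine the verdicts.
        RULE_SET = [
            {"id": 1, "proto": "tcp", "dport": 22,  "action": "deny"},
            {"id": 2, "proto": "tcp", "dport": 80,  "action": "accept"},
            {"id": 3, "proto": "udp", "dport": 53,  "action": "accept"},
            {"id": 4, "proto": "tcp", "dport": 443, "action": "accept"},
        ]

        def partition(rules, n_nodes):
            """Give each node a disjoint slice of the rule set (node i gets rules i, i+n, ...)."""
            return [rules[i::n_nodes] for i in range(n_nodes)]

        def node_filter(rules, packet):
            """Return (rule id, action) for the first matching rule on this node, or None."""
            for rule in rules:
                if rule["proto"] == packet["proto"] and rule["dport"] == packet["dport"]:
                    return rule["id"], rule["action"]
            return None

        def firewall_verdict(partitions, packet, default="deny"):
            # Combine node results: take the lowest-numbered matching rule to
            # preserve the first-match semantics of the original ordered rule set.
            matches = [m for m in (node_filter(p, packet) for p in partitions) if m]
            return min(matches)[1] if matches else default

        nodes = partition(RULE_SET, 2)
        print(firewall_verdict(nodes, {"proto": "tcp", "dport": 80}))    # accept
        print(firewall_verdict(nodes, {"proto": "tcp", "dport": 22}))    # deny
        print(firewall_verdict(nodes, {"proto": "tcp", "dport": 8080}))  # deny (default)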

  11. DUPLEX: A molecular mechanics program in torsion angle space for computing structures of DNA and RNA

    SciTech Connect (OSTI)

    Hingerty, B.E.

    1992-07-01

    DUPLEX produces energy minimized structures of DNA and RNA of any base sequence for single and double strands. The smallest subunits are deoxydinucleoside monophosphates, and up to 12 residues, single or double stranded, can be treated. In addition, it can incorporate NMR-derived interproton distances as constraints in the minimizations. Both upper and lower bounds for these distances can be specified. The program has been designed to run on a UNICOS Cray supercomputer, but should run, albeit slowly, on a laboratory computer such as a VAX or a workstation.
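
    A minimal sketch of how interproton distance bounds can enter such a minimization as a flat-bottomed penalty term, zero between the lower and upper bounds and quadratic outside them. DUPLEX's actual restraint form and force constant are not given in this record, so the values below are hypothetical.

        # Flat-bottomed distance restraint: zero penalty inside [lower, upper],
        # quadratic penalty outside (force constant k is hypothetical).
        def distance_restraint_energy(d, lower, upper, k=10.0):
            if d < lower:
                return k * (lower - d) ** 2
            if d > upper:
                return k * (d - upper) ** 2
            return 0.0

        # Example: a restraint of 2.5-4.0 angstroms on a proton pair.
        for d in (2.0, 3.0, 4.5):
            print(d, distance_restraint_energy(d, 2.5, 4.0))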

  12. Princeton graduate student Imène Goumiri creates computer program that

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab Princeton graduate student Imène Goumiri creates computer program that helps stabilize fusion plasmas By John Greenwald and Raphael Rosen April 14, 2016 Imène Goumiri led the design of a controller. (Photo by Elle Starkman/Office of Communications) Imène Goumiri, a Princeton University graduate student, has worked with physicists at

  13. REFLECT: A computer program for the x-ray reflectivity of bent perfect crystals

    SciTech Connect (OSTI)

    Etelaeniemi, V.; Suortti, P.; Thomlinson, W. (Dept. of Physics; Brookhaven National Lab., Upton, NY)

    1989-09-01

    The design of monochromators for x-ray applications, using either standard laboratory sources or synchrotron radiation sources, requires a knowledge of the reflectivity of the crystals. The reflectivity depends on the crystals used, the geometry of the reflection, the energy range of the radiation, and, in the present case, the cylindrical bending radius of the optical device. This report is intended to allow the reader to become familiar with, and therefore use, a computer program called REFLECT which we have used in the design of a dual beam Laue monochromator for synchrotron angiography. The results of REFLECT have been compared to measured reflectivities for both bent Bragg and Laue geometries. The results are excellent and should give full confidence in the use of the program. 6 refs.

  14. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect (OSTI)

    Nielson, K. K.; Sanders, R. W.

    1982-04-01

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bi-chromatic excitation as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards, and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches which give flexibility in performing the calculations best suited to the sample and the user needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one-to-two minutes using a PDP-11/34 computer operating under RSX-11M.

  15. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Software Computations Uncertainty Quantification Stochastic About CRF Transportation Energy Consortiums Engine Combustion Heavy Duty Heavy Duty Low-Temperature & Diesel Combustion ...

  16. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  17. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  18. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

    SciTech Connect (OSTI)

    Panchal, C.B.; Rabas, T.J.

    1991-05-01

    This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.
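
    For a feel of the rating step, the sketch below uses the standard effectiveness-NTU relation for a condenser: the condensing steam is nearly isothermal, so the effectiveness reduces to eps = 1 - exp(-NTU) on the coolant side. It is not the report's program, and the heat transfer coefficient, area, and seawater conditions are hypothetical placeholders.

        # Minimal condenser rating sketch: eps = 1 - exp(-NTU), NTU = U*A/(mdot*cp)
        # of the cold seawater stream (the condensing side is nearly isothermal).
        import math

        def condenser_rating(U, area, mdot_cw, cp_cw, t_cw_in, t_steam):
            ntu = U * area / (mdot_cw * cp_cw)
            eff = 1.0 - math.exp(-ntu)
            q = eff * mdot_cw * cp_cw * (t_steam - t_cw_in)   # heat duty, W
            t_cw_out = t_cw_in + q / (mdot_cw * cp_cw)
            return q, t_cw_out

        q, t_out = condenser_rating(U=3000.0, area=500.0, mdot_cw=800.0,
                                    cp_cw=4000.0, t_cw_in=5.0, t_steam=11.0)
        print(f"duty = {q/1e6:.2f} MW, coolant outlet = {t_out:.2f} C")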

  19. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
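
    The time integration named in the abstract can be illustrated with a small implicit Newmark-Beta stepper for M*a + C*v + K*u = f(t). The two-degree-of-freedom system, damping, and loading below are hypothetical stand-ins for the assembled NASTRAN matrices.

        # Implicit Newmark-beta (average acceleration: beta=1/4, gamma=1/2) for a
        # linear structural system M*a + C*v + K*u = f(t).
        import numpy as np

        def newmark(M, C, K, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
            u, v = u0.copy(), v0.copy()
            a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)   # consistent initial acceleration
            a1, a2, a3 = 1/(beta*dt**2), 1/(beta*dt), 1/(2*beta) - 1
            a4, a5, a6 = gamma/(beta*dt), gamma/beta - 1, dt*(gamma/(2*beta) - 1)
            K_eff = K + a1*M + a4*C
            history = [u.copy()]
            for n in range(1, nsteps + 1):
                f_eff = f(n*dt) + M @ (a1*u + a2*v + a3*a) + C @ (a4*u + a5*v + a6*a)
                u_new = np.linalg.solve(K_eff, f_eff)
                a_new = a1*(u_new - u) - a2*v - a3*a
                v = v + dt*((1 - gamma)*a + gamma*a_new)
                u, a = u_new, a_new
                history.append(u.copy())
            return np.array(history)

        M = np.diag([2.0, 1.0])
        K = np.array([[400.0, -200.0], [-200.0, 200.0]])
        C = 0.01 * K                                    # stiffness-proportional damping
        load = lambda t: np.array([0.0, 50.0*np.sin(5.0*t)])
        resp = newmark(M, C, K, load, np.zeros(2), np.zeros(2), dt=0.01, nsteps=500)
        print("max tip displacement:", resp[:, 1].max())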

  20. Computational Analysis of an Evolutionarily Conserved VertebrateMuscle Alternative Splicing Program

    SciTech Connect (OSTI)

    Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr,Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky,Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

    2006-06-15

    A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns were examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.
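
    A minimal sketch of the kind of motif scan described: counting candidate elements such as UGCAUG and ACUAAC within the first 200 nt of each downstream intron. The sequences and helper names are hypothetical stand-ins for the microarray-defined exon set.

        # Count candidate regulatory motifs in the proximal downstream intron
        # (first `window` nucleotides) of each exon.
        MOTIFS = ["UGCAUG", "ACUAAC"]

        def count_motifs(downstream_introns, motifs=MOTIFS, window=200):
            counts = {m: 0 for m in motifs}
            exons_with_hit = {m: 0 for m in motifs}
            for intron in downstream_introns:
                region = intron[:window].upper().replace("T", "U")
                for m in motifs:
                    n = region.count(m)
                    counts[m] += n
                    exons_with_hit[m] += 1 if n else 0
            return counts, exons_with_hit

        introns = ["gcaUGCAUGaccguACUAACgg", "ttUGCAUGttttUGCAUGcc", "aaaaccggtt"]
        totals, hits = count_motifs(introns)
        print("total occurrences:", totals)
        print("exons with >=1 hit:", hits)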

    1. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, ...

    2. LIAR -- A computer program for the modeling and simulation of high performance linacs

      SciTech Connect (OSTI)

      Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

      1997-04-01

      The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

    3. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users Manual

      SciTech Connect (OSTI)

      Dr. Bradley J Schrader

      2010-10-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios and radiological sabotage events, and to evaluate safety basis accident consequences. This user's manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.
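
      One of the effects the abstract mentions, decay and ingrowth during transport, can be illustrated with the two-member Bateman solution for a parent-daughter chain. The half-lives, transport time, and initial inventory below are hypothetical and are not taken from RSAC-7.

          # Two-member decay chain parent -> daughter (Bateman solution): decay of
          # the parent and ingrowth of the daughter over a transport time t.
          import math

          def parent_daughter(n1_0, n2_0, lam1, lam2, t):
              n1 = n1_0 * math.exp(-lam1 * t)
              n2 = (n2_0 * math.exp(-lam2 * t)
                    + n1_0 * lam1 / (lam2 - lam1)
                    * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
              return n1, n2

          # Example: parent half-life 8 d, daughter half-life 2 d, 1 h transport time.
          lam1 = math.log(2) / (8 * 86400.0)
          lam2 = math.log(2) / (2 * 86400.0)
          print(parent_daughter(1.0e15, 0.0, lam1, lam2, 3600.0))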

    4. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    5. CriTi-CAL: A computer program for Critical Coiled Tubing Calculations

      SciTech Connect (OSTI)

      He, X.

      1995-12-31

      A computer software package for simulating coiled tubing operations has been developed at Rogaland Research. The software is named CriTi-CAL, for Critical Coiled Tubing Calculations. It is a PC program running under Microsoft Windows. CriTi-CAL is designed for predicting force, stress, torque, lockup, circulation pressure losses and along-hole-depth corrections for coiled tubing workover and drilling operations. CriTi-CAL features a user-friendly interface, integrated work string and survey editors, flexible input units and output format, on-line documentation and extensive error trapping. CriTi-CAL was developed using a combination of Visual Basic and C. Such an approach is an effective way to quickly develop high-quality, small to medium size software for the oil industry. The software is based on the results of intensive experimental and theoretical studies on buckling and post-buckling of coiled tubing at Rogaland Research. The software has been validated by full-scale test results and field data.

    6. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

      SciTech Connect (OSTI)

      Park, Won Young; Phadke, Amol; Shah, Nihar

      2012-06-29

      Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today's technology. We evaluate the cost effectiveness of a key technology that further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture the global energy saving potential from PC monitors, which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

    7. Computer program for the sensitivity calculation of a CR-39 detector in a diffusion chamber for radon measurements

      SciTech Connect (OSTI)

      Nikezic, D.; Stajic, J. M.; Yu, K. N.

      2014-02-15

      Computer software for calculating the sensitivity to radon of a CR-39 detector enclosed in a diffusion chamber is described in this work. The software consists of two programs, both written in the standard Fortran 90 programming language. The physical background and a numerical example are given. The presented software is intended for researchers in the radon measurement community. Previously published computer programs TRACK-TEST.F90 and TRACK-VISION.F90 [D. Nikezic and K. N. Yu, Comput. Phys. Commun. 174, 160 (2006); D. Nikezic and K. N. Yu, Comput. Phys. Commun. 178, 591 (2008)] are used here as subroutines to calculate the track parameters and to determine whether the track is visible or not, based on the incident angle, impact energy, etching conditions, gray level, and visibility criterion. The results obtained by the software, using five different V functions, were compared with the experimental data found in the literature. Two of the functions reproduced the experimental data very well, while the other three gave lower sensitivity than experiment.

    8. About the ASCR Computer Science Program | U.S. DOE Office of...

      Office of Science (SC) Website

      computer architectures that incorporate new power efficient memory and storage systems. ... cache hierarchies not useful; 4) energy-efficient on-chip and off-chip communication ...

    9. Programs for attracting under-represented minority students to graduate school and research careers in computational science. Final report for period October 1, 1995 - September 30, 1997

      SciTech Connect (OSTI)

      Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno

      1997-10-01

      Programs have been established at Florida A&M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic-year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer; these included workshops, meetings, and lectures. Through the exposure this program provided to scientific ideas and research in computational science, it is likely that the students will go on to apply tools from this interdisciplinary field successfully.

    10. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    11. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, given the growing prominence of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

    12. THERM3D -- A boundary element computer program for transient heat conduction problems

      SciTech Connect (OSTI)

      Ingber, M.S.

      1994-02-01

      The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.

    13. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

      Broader source: Energy.gov [DOE]

      The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

    14. Opportunities for Russian Nuclear Weapons Institute developing computer-aided design programs for pharmaceutical drug discovery. Final report

      SciTech Connect (OSTI)

      1996-09-23

      The goal of this study is to determine whether physicists at the Russian Nuclear Weapons Institute can profitably service the need for computer-aided drug design (CADD) programs. The Russian physicists' primary competitive advantages are their ability to write particularly efficient code able to work with limited computing power, a history of working with very large, complex modeling systems, an extensive knowledge of physics and mathematics, and price competitiveness. Their primary competitive disadvantages are their lack of background in biology, along with cultural and geographic issues. The first phase of the study focused on defining the competitive landscape, primarily through interviews with and literature searches on the key providers of CADD software. The second phase focused on users of CADD technology to determine deficiencies in the current product offerings, to understand what product they most desired, and to define the potential demand for such a product.

    15. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect (OSTI)

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
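
      A minimal sketch of the present-worth step the abstract mentions, discounting a stream of annual revenue requirements back to the base year. The cash flows and discount rate are hypothetical, and this function is not part of RATEPAC.

          # Present worth of revenue requirements: discount each year's value back
          # to the base year at a fixed discount rate.
          def present_worth(annual_revenue_requirements, discount_rate):
              return sum(rr / (1.0 + discount_rate) ** year
                         for year, rr in enumerate(annual_revenue_requirements, start=1))

          revenue_requirements = [120.0, 118.0, 115.0, 113.0, 110.0]  # M$/yr, hypothetical
          print(f"present worth: {present_worth(revenue_requirements, 0.10):.1f} M$")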

    16. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

    17. MPSalsa a finite element computer program for reacting flow problems. Part 2 - user's guide

      SciTech Connect (OSTI)

      Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

      1996-09-01

      This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

    18. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

      SciTech Connect (OSTI)

      Strange, D. L.; Bander, T. J.

      1981-04-01

      The MILDOS Computer Code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered; releases to surface water and to groundwater are not addressed in MILDOS. This code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters. The surface concentrations are modified by radioactive decay, weathering and other environmental processes. The MILDOS Computer Code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are: inhalation, external exposure from ground concentrations, external exposure from cloud immersion, ingestion of vegetables, ingestion of meat, and ingestion of milk. Dose commitments are calculated using dose conversion factors, which are ultimately based on recommendations of the International Commission on Radiological Protection (ICRP). These factors are fixed internally in the code and are not part of the input options. Dose commitments available from the code are as follows: individual dose commitments for use in predictive 40 CFR 190 compliance evaluations (radon and short-lived daughters are excluded); total individual dose commitments (impacts from all available radionuclides are considered); and annual population dose commitments (regional, extraregional, total and cumulative). This model is primarily designed for uranium mill facilities and should not be used for operations with different radionuclides or processes.
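
      The dispersion approach named in the abstract can be illustrated with a sector-averaged Gaussian plume estimate of chi/Q for one wind sector. The sigma-z value, wind data, release height, and source term below are hypothetical placeholders, not MILDOS inputs.

          # Sector-averaged Gaussian plume: chi/Q for one wind direction sector.
          import math

          def sector_average_chi_over_q(x, wind_freq, wind_speed, sigma_z,
                                        release_height, n_sectors=16):
              """Return chi/Q (s/m^3) at downwind distance x (m) for one sector."""
              sector_width = 2.0 * math.pi / n_sectors
              return (math.sqrt(2.0 / math.pi) * wind_freq
                      / (sigma_z * wind_speed * x * sector_width)
                      * math.exp(-release_height**2 / (2.0 * sigma_z**2)))

          # Example: 800 m downwind, wind into this sector 12% of the time at 4 m/s,
          # sigma_z = 30 m, 10 m release height; Q is a hypothetical emission rate.
          chi_over_q = sector_average_chi_over_q(800.0, 0.12, 4.0, 30.0, 10.0)
          Q = 5.0e6   # Bq/s
          print(f"annual average concentration ~ {chi_over_q * Q:.3e} Bq/m^3")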

    19. Development of computer program ENMASK for prediction of residual environmental masking-noise spectra, from any three independent environmental parameters

      SciTech Connect (OSTI)

      Chang, Y.-S.; Liebich, R. E.; Chun, K. C.

      2000-03-31

      Residual environmental sound can mask intrusive (unwanted) sound. It is a factor that can affect noise impacts and must be considered both in noise-impact studies and in noise-mitigation designs. Models for quantitative prediction of sensation level (audibility) and psychological effects of intrusive noise require an input with 1/3 octave-band spectral resolution of environmental masking noise. However, the majority of published residual environmental masking-noise data are given with either octave-band frequency resolution or only single A-weighted decibel values. A model has been developed that enables estimation of 1/3 octave-band residual environmental masking-noise spectra and relates certain environmental parameters to A-weighted sound level. This model provides a correlation among three environmental conditions: measured residual A-weighted sound-pressure level, proximity to a major roadway, and population density. Cited field-study data were used to compute the most probable 1/3 octave-band sound-pressure spectrum corresponding to any selected one of these three inputs. In turn, such spectra can be used as an input to models for prediction of noise impacts. This paper discusses specific algorithms included in the newly developed computer program ENMASK. In addition, the relative audibility of the environmental masking-noise spectra at different A-weighted sound levels is discussed, which is determined by using the methodology of program ENAUDIBL.

    20. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos Computing

    1. Ocean-ice/oil-weathering computer program user's manual. Final report

      SciTech Connect (OSTI)

      Kirstein, B.E.; Redding, R.T.

      1987-10-01

      The ocean-ice/oil-weathering code is written in FORTRAN as a series of stand-alone subroutines that can easily be installed on almost any computer. All of the trial-and-error routines, integration routines, and other special routines are written into the code so that nothing more than the normal system functions such as EXP are required. The code is user-interactive and requests input by prompting questions with suggested input. Therefore, the user can actually learn about the nature of crude oil and oil weathering by using this code. The ocean-ice oil-weathering model considers the following weathering processes: evaporation; dispersion (oil into water); mousse formation (water into oil); and spreading. These processes are used to predict the mass balance and composition of oil remaining in the slick as a function of time and environmental parameters.

    2. A Computer Program for Processing In Situ Permeable Flow Sensor Data

      Energy Science and Technology Software Center (OSTI)

      1996-04-15

      FLOW4.02 is used to interpret data from In Situ Permeable Flow Sensors, which are instruments that directly measure groundwater flow velocity in saturated, unconsolidated geologic formations (Ballard, 1994, 1996; Ballard et al., 1994; Ballard et al., in press). The program accepts as input the electrical resistance measurements from the thermistors incorporated within the flow sensors, converts the resistance data to temperatures and then uses the temperature information to calculate the groundwater flow velocity and associated uncertainty. The software includes many capabilities for manipulating, graphically displaying and writing to disk the raw resistance data, the temperature data and the calculated flow velocity information. This version is a major revision of a previously copyrighted version (FLOW1.0).
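
      The resistance-to-temperature step mentioned in the abstract could look like the sketch below, which uses the common Steinhart-Hart thermistor equation. FLOW's actual calibration is not given in this record, so the coefficients are hypothetical.

          # Convert thermistor resistance to temperature with the Steinhart-Hart
          # equation: 1/T = a + b*ln(R) + c*ln(R)^3 (coefficients are hypothetical).
          import math

          def thermistor_temperature_c(resistance_ohm, a=1.129e-3, b=2.341e-4, c=8.775e-8):
              ln_r = math.log(resistance_ohm)
              t_kelvin = 1.0 / (a + b * ln_r + c * ln_r**3)
              return t_kelvin - 273.15

          for r in (12000.0, 10000.0, 8000.0):
              print(f"{r:8.0f} ohm -> {thermistor_temperature_c(r):6.2f} C")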

    3. TRUST: A Computer Program for Variably Saturated Flow in Multidimensional, Deformable Media

      SciTech Connect (OSTI)

      Reisenauer, A. E.; Key, K. T.; Narasimhan, T. N.; Nelson, R. W.

      1982-01-01

      The computer code, TRUST, provides a versatile tool to solve a wide spectrum of fluid flow problems arising in variably saturated deformable porous media. The governing equations express the conservation of fluid mass in an elemental volume that has a constant volume of solid. Deformation of the skeleton may be nonelastic. Permeability and compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may include hysteresis. The code developed by T. N. Narasimhan grew out of the original TRUMP code written by A. L. Edwards. The code uses an integrated finite difference algorithm for numerically solving the governing equation. Marching in time is performed by a mixed explicit-implicit numerical procedure in which the time step is internally controlled. The time step control and related features in the TRUST code provide an effective control of the potential numerical instabilities that can arise in the course of solving this difficult class of nonlinear boundary value problems. This document brings together the equations, theory, and user's manual for the code as well as a sample case with input and output.

    4. User's manual for EROSION/MOD1: A computer program for fluids-solids erosion

      SciTech Connect (OSTI)

      Lyczkowski, R.W.; Bouillard, J.X.; Folga, S.M.; Chang, S.L.

      1992-09-01

      This report describes EROSION/MOD1, a computer program that was developed as a two-dimensional analytical tool for the general analysis of erosion in fluid-solids systems and the specific analysis of erosion in bubbling fluidized-bed combustors. Contained herein are implementations of Finnie's impaction erosion model, Neilson and Gilchrist's combined ductile and brittle erosion model, and several forms of the monolayer energy dissipation erosion model. These models and their implementations are described briefly. The global structure of EROSION/MOD1 that contains these models is also discussed. The input data for EROSION/MOD1 are given, and a sample problem for a fluidized bed is described. The hydrodynamic input data are assumed to come from the output of FLUFIX/MOD2.
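
      As a rough illustration of one of the models listed, the sketch below evaluates the piecewise impact-angle function commonly associated with Finnie's ductile erosion model, with erosion taken proportional to particle mass times velocity squared times that angle function. The proportionality constant and particle data are hypothetical, and this is not the EROSION/MOD1 implementation.

          # Dimensionless impact-angle dependence commonly used with Finnie's model:
          # low angles cut the surface, high angles approach a constant plateau.
          import math

          def finnie_angle_function(alpha_rad):
              if math.tan(alpha_rad) <= 1.0 / 3.0:
                  return math.sin(2.0 * alpha_rad) - 3.0 * math.sin(alpha_rad) ** 2
              return math.cos(alpha_rad) ** 2 / 3.0

          def erosion_volume(particle_mass, velocity, alpha_rad, k=1.0e-7):
              # k lumps material properties; value here is purely illustrative.
              return k * particle_mass * velocity**2 * finnie_angle_function(alpha_rad)

          for deg in (10, 18.5, 30, 60, 90):
              print(f"{deg:5.1f} deg -> f = {finnie_angle_function(math.radians(deg)):.4f}")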

    5. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

      SciTech Connect (OSTI)

      Barbara Chapman

      2012-02-01

      OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

    6. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect (OSTI)

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

    7. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programming Programming Compiling and linking programs on Euclid. Compiling Codes How to compile and link MPI codes on Euclid. Read More » Using the ACML Math Library How to compile and link a code with the ACML library and include the $ACML environment variable. Read More » Process Limits The hard and soft process limits are listed. Read More » Last edited: 2016-04-29 11:35:11

    8. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programming Programming Compiling Codes on Hopper Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with the MPI libraries and other Cray system software libraries. All the MPI and Cray system include directories are also transparently imported. This page shows examples of how to compile codes on Franklin and Hopper. Read More » Shared and Dynamic Libraries

    9. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

      SciTech Connect (OSTI)

      Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

      2014-09-01

      Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that helps the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactor Sustainability Program, researchers at Idaho National Laboratory (INL), along with partners from the nuclear industry, have been investigating the design requirements for computer-based work instructions (including operations procedures, work orders, maintenance procedures, etc.) to increase efficiency, safety, and cost competitiveness of existing light water reactors.

    10. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    11. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      using MPI and OpenMP on NERSC systems, the same does not always exist for other supported parallel programming models such as UPC or Chapel. At the same time, we know that these...

    12. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      provided on the Cray systems at NERSC. The Programming Environment is managed by a meta-module named similar to "PrgEnv-gnu4.6". The "gnu" indicates that it is providing the...

    13. Mira Computational Readiness Assessment | Argonne Leadership...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ...

    14. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

      SciTech Connect (OSTI)

      Farbin, Amir

      2015-07-15

      This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

    15. Thermoelectric Materials by Design, Computational Theory and...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      by Design, Computational Theory and Structure Thermoelectric Materials by Design, Computational Theory and Structure 2009 DOE Hydrogen Program and Vehicle Technologies Program...

    16. Paul C. Messina | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing ...

    17. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    18. Supercomputing Challenge Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      program that teaches middle school and high school students how to use powerful computers to model real-world problems and to explore computational approaches to their...

    19. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    20. Programs & User Facilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Programs Office of Science Programs & User Facilities ... Advanced Scientific Computing Research Applied Mathematics Co-Design Centers Exascale Co-design Center ...

    1. Programming Challenges Presentations | U.S. DOE Office of Science...

      Office of Science (SC) Website

      Programming Challenges Presentations Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming ...

    2. Programming Challenges Workshop | U.S. DOE Office of Science...

      Office of Science (SC) Website

      Programming Challenges Workshop Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming ...

    3. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

      SciTech Connect (OSTI)

      Robert W. Numrich

      2008-04-22

      The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through access to the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to extend the co-array model to other languages in a small experimental version of Co-array Python. Another collaborative project defined a Fortran 95 interface to ARMCI to encourage Fortran programmers to use the one-sided communication model in anticipation of their conversion to the co-array model later. A collaborative project with the Earth Sciences community at NASA Goddard and GFDL experimented with the co-array model within computational kernels related to their climate models, first using CafLib and then extending the co-array model to use design patterns. Future work will build on the design-pattern idea with a redesign of CafLib as a true object-oriented library using Fortran 2003 and as a parallel numerical library using Fortran 2008.

    4. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    5. 5 Checks & 5 Tips for INCITE | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ...

    6. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    7. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

    8. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: ''Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    9. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation & Simulation: Extensive combinatorial results and ongoing basic...

    10. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

      SciTech Connect (OSTI)

      Watson, S.B.; Ford, M.R.

      1980-02-01

      A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure of radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules of which: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations in computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit of intake, and derived air concentration.
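      As a rough illustration of how the third module combines the outputs of the first two, the sketch below evaluates an ICRP 30-style committed dose equivalent, H50(T) = 1.6E-10 x sum over S of U_S x SEE(T from S), where U_S is the number of nuclear transformations in source organ S over 50 years and SEE is the specific effective energy in MeV per gram per transformation. The organ names and numbers are hypothetical placeholders, not data or code from the report.

```python
# Minimal sketch (not the ICRP code itself) of the dose-module bookkeeping under the
# ICRP Committee 2 formulation: committed dose equivalent to a target organ T from
# all source organs S. All organ names and numbers below are hypothetical.

def committed_dose_equivalent(U, SEE_T):
    """U[s]: nuclear transformations in source organ s over 50 years;
    SEE_T[s]: specific effective energy to the target, MeV/g per transformation."""
    return 1.6e-10 * sum(U[s] * SEE_T[s] for s in U)   # sieverts

U = {"lung": 3.2e14, "liver": 5.0e13}            # hypothetical cumulated activities
SEE_lung = {"lung": 2.1e-4, "liver": 3.0e-6}     # hypothetical SEE values
print(committed_dose_equivalent(U, SEE_lung), "Sv")
```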

    11. ALCF Acknowledgment Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User ...

    12. Computing Frontier: Distributed Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and using specialized high-speed, low-latency networks to communicate partial results ... possibly requiring the use of multiple Web browsers and a number of utility programs ...

    13. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    14. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, Fortran and with the LINUX operating system. Duration & Location The program will last ten...

    15. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      dedicated to enabling leading-edge computational capabilities to advance fundamental ... (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE ...

    16. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division Citation: For exemplary administrative secretarial support to the Computer Science and Mathematics Division and to the ORNL ...

    17. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab: The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy-efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    18. Program Activities | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      The Advanced Simulation and Computing program (ASC) is part of ... Office of Defense Programs. Defense Programs has six components: Research, ... at making the scientific and ...

    19. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 2, User's manual

      SciTech Connect (OSTI)

      Gartling, D.K.

      1996-05-01

      User instructions are given for the finite element, electromagnetics program, TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.

    20. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex SCC: The Strategic Computing Complex The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing

    1. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOE Patents [OSTI]

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    2. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    3. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect (OSTI)

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~;;25percent total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

    4. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

      SciTech Connect (OSTI)

      Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

      2012-02-01

      The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated based on specific detail level of process or plant, i.e., 1) plant level; 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon the comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free downloads from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy from the LBNL website. It is expected that the use of BEST-Dairy tool will advance understanding of energy and water usage in individual dairy plants, augment benchmarking activities in the market places, and facilitate implementation of efficiency measures and strategies to save energy and water usage in the dairy industry. Industrial adoption of this emerging tool and technology in the market is expected to benefit dairy plants, which are important customers of California utilities. Further demonstration of this benchmarking tool is recommended, for facilitating its commercialization and expansion in functions of the tool. Wider use of this BEST-Dairy tool and its continuous expansion (in functionality) will help to reduce the actual consumption of energy and water in the dairy industry sector. The outcomes comply very well with the goals set by AB 1250 for the PIER program.

    5. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes: a more detailed hierarchical map of the topology of a compute node is available.

    6. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http:isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    7. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      DesignForward FastForward CAL Partnerships Shifter: User Defined Images Archive APEX Home R & D Exascale Computing Exascale Computing Moving forward into the exascale era, ...

    8. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

      SciTech Connect (OSTI)

      Biffle, J.H.

      1993-02-01

      JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equation. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.
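      To make the solution strategy concrete, the sketch below shows a generic nonlinear conjugate gradient iteration (Polak-Ribiere form with Armijo backtracking) applied to a small algebraic test function. It is only a schematic of the method class named in the abstract; it is not the JAC3D algorithm, its convergence-acceleration methods, or its finite element residual.

```python
# Generic nonlinear conjugate gradient (Polak-Ribiere+) with Armijo backtracking,
# demonstrated on a simple quadratic. Not the JAC3D implementation.
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d.
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+, restart if negative
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: fall back to steepest descent
            d = -g_new
        g = g_new
    return x

# Example: minimize f(x) = (x0 - 3)^2 + 10*(x1 + 1)^2.
f = lambda x: (x[0] - 3)**2 + 10*(x[1] + 1)**2
grad = lambda x: np.array([2*(x[0] - 3), 20*(x[1] + 1)])
print(nonlinear_cg(f, grad, [0.0, 0.0]))   # approaches [3, -1]
```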

    9. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Sciences Our Vision National User Facilities Research Areas In Focus Global Solutions ⇒ Navigate Section Our Vision National User Facilities Research Areas In Focus Global Solutions Computational Research Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Scientific Networking

    10. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 3

      SciTech Connect (OSTI)

      Wolery, T.J.

      1992-09-14

      EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
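      The two disequilibrium measures quoted above are simple to evaluate once the ion activity product Q and equilibrium constant K are known; the sketch below does exactly that. The numerical values are hypothetical stand-ins, not entries from the EQ3NR thermodynamic database.

```python
# Saturation index and thermodynamic affinity as defined in the abstract:
#   SI = log10(Q/K)       and       A = -2.303 * R * T * log10(Q/K)
# The log Q and log K values below are hypothetical, not EQ3NR database values.
R = 8.314462618  # gas constant, J/(mol*K)

def saturation_index(log10_Q, log10_K):
    return log10_Q - log10_K

def affinity(log10_Q, log10_K, T_kelvin):
    return -2.303 * R * T_kelvin * (log10_Q - log10_K)   # J/mol

si = saturation_index(log10_Q=-8.85, log10_K=-8.48)      # SI < 0: undersaturated
print("SI =", round(si, 2), " A =", round(affinity(-8.85, -8.48, 298.15), 1), "J/mol")
```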

    11. Savannah River Ecology Laboratory - Outreach Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program

    12. Savannah River Ecology Laboratory - Outreach Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program

    13. Probability of pipe fracture in the primary coolant loop of a PWR plant. Volume 9. PRAISE computer code user's manual. Load Combination Program Project I final report

      SciTech Connect (OSTI)

      Lim, E.Y.

      1981-06-01

      The PRAISE (Piping Reliability Analysis Including Seismic Events) computer code estimates the influence of earthquakes on the probability of failure at a weld joint in the primary coolant system of a pressurized water reactor. Failure, either a through-wall defect (leak) or a complete pipe severance (a large-LOCA), is assumed to be caused by fatigue crack growth of an as-fabricated interior surface circumferential defect. These defects are assumed to be two-dimensional and semi-elliptical in shape. The distribution of initial crack sizes is a function of crack depth and aspect ratio. PRAISE treats the inter-arrival times of operating transients either as a constant or exponentially distributed according to observed or postulated rates. Leak rate and leak detection models are also included. The criterion for complete pipe severance is exceedance of a net section critical stress. Earthquakes of various intensity and arbitrary occurrence times can be modeled. PRAISE presently assumes that exactly one initial defect exists in the weld and that the earthquake of interest is the first earthquake experienced at the reactor. PRAISE has a very modular structure and can be tailored to a variety of crack growth and piping reliability problems. Although PRAISE was developed on a CDC-7600 computer, it was, however, coded in standard FORTRAN IV and is readily transportable to other machines.
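      The overall simulation pattern described above, random transient arrivals driving fatigue growth of an initial surface defect until a through-wall condition is reached, can be sketched in a few lines. The toy model below uses exponentially distributed inter-arrival times and a crude Paris-type growth increment; the growth law and every number in it are illustrative placeholders, not PRAISE's crack-growth models, initial-crack distributions, or failure criteria.

```python
# Toy Monte Carlo sketch of the simulation pattern only; not the PRAISE code.
import math
import random

def simulate_weld(a0_mm=2.0, wall_mm=20.0, rate_per_yr=5.0, life_yr=40.0,
                  C=3e-7, m=3.0, dK=25.0):
    t, a = 0.0, a0_mm
    while True:
        t += random.expovariate(rate_per_yr)       # time to the next operating transient
        if t > life_yr:
            return False                            # weld survives the plant lifetime
        a += C * (dK * math.sqrt(a)) ** m           # toy Paris-type growth increment
        if a >= wall_mm:
            return True                             # through-wall defect (leak)

random.seed(1)
trials = 20000
leaks = sum(simulate_weld() for _ in range(trials))
print("estimated leak probability:", leaks / trials)
```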

    14. Wind Energy Program: Top 10 Program Accomplishments

      Broader source: Energy.gov [DOE]

      Brochure on the top accomplishments of the Wind Energy Program, including the development of large wind machines, small machines for the residential market, wind tunnel testing, computer codes for modeling wind systems, high definition wind maps, and successful collaborations.

    15. ALGEBRA: a computer program that algebraically manipulates finite element output data. [In extended FORTRAN for CDC 7600 or CYBER 76 only

      SciTech Connect (OSTI)

      Richgels, M A; Biffle, J H

      1980-09-01

      ALGEBRA is a program that allows the user to process output data from finite-element analysis codes before they are sent to plotting routines. These data take the form of variable values (stress, strain, and velocity components, etc.) on a tape that is both the output tape from the analyses code and the input tape to ALGEBRA. The ALGEBRA code evaluates functions of these data and writes the function values on an output tape that can be used as input to plotting routines. Convenient input format and error detection capabilities aid the user in providing ALGEBRA with the functions to be evaluated. 1 figure.
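      The sort of function evaluation ALGEBRA performs is easy to mimic: take variable values produced by the analysis code and compute a derived quantity before plotting. The sketch below computes an effective (von Mises) stress from six stress components; the arrays are invented stand-ins for the output tape, and the script illustrates the idea rather than ALGEBRA's input syntax.

```python
# Derive an effective (von Mises) stress from per-element stress components,
# the kind of user-specified function ALGEBRA evaluates before plotting.
import numpy as np

def von_mises(sxx, syy, szz, sxy, syz, szx):
    return np.sqrt(0.5 * ((sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2)
                   + 3.0 * (sxy**2 + syz**2 + szx**2))

# One value per element, as it might appear on the analysis code's output tape.
sxx = np.array([100.0, 80.0]); syy = np.array([20.0, 15.0]); szz = np.array([0.0, 0.0])
sxy = np.array([10.0, 5.0]);   syz = np.zeros(2);            szx = np.zeros(2)
print(von_mises(sxx, syy, szz, sxy, syz, szx))   # derived variable ready for plotting
```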

    16. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    17. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    18. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. Thirty-eight students from 10 East Bay high schools registered...

    19. How to Get an Allocation | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Impact on Theory and Experiment Program Purpose: Supports computationally intensive, large-scale research projects that aim to address "grand challenges" in science ...

    20. Unsolicited Projects in 2012: Research in Computer Architecture...

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    1. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes: Quad-core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively; 2 MB L3 cache shared among the 4 cores. Compute Node Software: By default the compute nodes run a restricted low-overhead

    2. Stockpile Stewardship Program Quarterly Experiments | National...

      National Nuclear Security Administration (NNSA)

      a robust program of scientific inquiry used to ... models and NNSA's Advanced Simulation and Computing (ASC) Program to ... The quarterly summary prepared by NNSA's Office of ...

    3. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science and Information Technology Student Pipeline Program Description Los Alamos National Laboratory's High Performance Computing and Information Technology Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer Security, Software Engineering, Computer Engineering, and Electrical Engineering. Students are provided a mentor and challenging projects to demonstrate their

    4. Covered Product Category: Computers | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers Covered Product Category: Computers The Federal Energy Management Program (FEMP) provides acquisition guidance for computers, a product category covered by the ENERGY STAR program. Federal laws and requirements mandate that agencies buy ENERGY STAR-qualified products in all product categories covered by this program and any acquisition actions that are not specifically exempted by law. MEETING EFFICIENCY REQUIREMENTS FOR FEDERAL PURCHASES The U.S. Environmental Protection Agency (EPA)

    5. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect (OSTI)

      Osario, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21 C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes deliver sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20 C in 30 s and from 37 to 15 C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied for the design and development of cooling devices for seizure control.
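      The governing model named above is the Pennes bioheat equation, rho*c*dT/dt = k*d2T/dx2 + w_b*(T_a - T) + q_m. The sketch below integrates a one-dimensional version of it explicitly, with the probe surface held cold, to show the kind of spatiotemporal cooling profile the study computed with ABAQUS. All material and perfusion parameters are generic illustrative values, not those used in the paper.

```python
# Minimal 1D explicit finite-difference sketch of the Pennes bioheat equation with a
# cold probe surface at x = 0. Parameter values are illustrative, not from the study.
import numpy as np

k, rho, c = 0.5, 1050.0, 3600.0      # W/m/K, kg/m^3, J/kg/K (illustrative tissue values)
w_b, T_a, q_m = 3.3e4, 37.0, 1.0e4   # perfusion coefficient W/m^3/K, arterial temp C, metabolic heat W/m^3

L, nx = 0.01, 101                    # 1 cm of tissue, grid points
dx = L / (nx - 1)
dt = 0.2 * dx**2 * rho * c / k       # stable explicit time step
T = np.full(nx, 37.0)                # start at body temperature

t = 0.0
while t < 30.0:                      # simulate 30 seconds of cooling
    T[0] = 0.0                       # probe surface held cold (Dirichlet boundary)
    lap = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (k*lap + w_b*(T_a - T[1:-1]) + q_m) / (rho * c)
    t += dt

print("temperature 1 mm from the probe after 30 s: %.1f C" % T[10])
```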

    6. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect (OSTI)

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.

    7. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      and Vehicle Technologies Program Annual Merit Review and Peer Evaluation PDF icon lm012li2011o.pdf More Documents & Publications Integrated Computational Materials Engineering ...

    8. Computational Design of Interfaces for Photovoltaics | Argonne...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Design of Interfaces for Photovoltaics PI Name: Noa Marom PI Email: nmarom@tulane.edu Institution: Tulane University Allocation Program: ALCC Allocation Hours at...

    9. Computational Scientist | Princeton Plasma Physics Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Department, with interest in leadership class computing of gyrokinetic fusion edge plasma research. A candidate who has knowledge in hybrid parallel programming with MPI, OpenMP,...

    10. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Project (Part 1) Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project (Part 1) 2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit...

    11. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    12. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    13. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    14. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Laboratory (pdf) DOENNSA Laboratories Fulfill National Mission with Trinity and Cielo Petascale Computers (pdf) Exascale Co-design Center for Materials in Extreme...

    15. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center ...

    16. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    17. Computational and Theoretical Chemistry | U.S. DOE Office of...

      Office of Science (SC) Website

      Computational and Theoretical Chemistry Chemical Sciences, Geosciences, & Biosciences ... Molecular Sciences and Gas Phase Chemical Physics programs-which together comprise ...

    18. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Directed Research and Development (LDRD) Defense Advanced Research Projects Agency (DARPA) Defense Threat Reduction Agency (DTRA) Research Applied Computer Science Co-design ...

    19. Radiological Worker Computer Based Training

      Energy Science and Technology Software Center (OSTI)

      2003-02-06

      Argonne National Laboratory has developed an interactive computer based training (CBT) version of the standardized DOE Radiological Worker training program. This CD-ROM based program utilizes graphics, animation, photographs, sound and video to train users in ten topical areas: radiological fundamentals, biological effects, dose limits, ALARA, personnel monitoring, controls and postings, emergency response, contamination controls, high radiation areas, and lessons learned.

    20. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introductory computer course.

    1. Integrating Program Component Executables

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Integrating Program Component Executables on Distributed Memory Architectures via MPH Chris Ding and Yun He Computational Research Division, Lawrence Berkeley National Laboratory University of California, Berkeley, CA 94720, USA chqding@lbl.gov, yhe@lbl.gov Abstract A growing trend in developing large and complex applications on today's Teraflop computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the climate

    2. Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A. A.

      2011-05-01

      This presentation describes NREL's computer aided engineering program for electric drive vehicle batteries.

    3. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    4. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
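      DOE-MACSYMA itself is a Lisp system with its own ALGOL-like user language, so the snippet below is only a rough modern analogue: the same classes of symbolic operations listed above (differentiation, definite integration, limits, series expansion, and solving an ordinary differential equation) expressed with the open-source SymPy library.

```python
# Rough SymPy analogue of the symbolic operations listed in the abstract; not MACSYMA.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) / x

print(sp.diff(f, x))                                      # differentiate
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))    # definite integral -> sqrt(pi)
print(sp.limit(f, x, 0))                                  # limit -> 1
print(sp.series(f, x, 0, 6))                              # Taylor series about x = 0

y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) + y(x), y(x)))            # solve y'' + y = 0
```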

    5. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes: Quad-core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB...

    6. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    7. computational fluid dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational fluid dynamics - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs

    8. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Services - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced

    9. Final Report for Enhancing the MPI Programming Model for PetaScale...

      Office of Scientific and Technical Information (OSTI)

      United States Language: English Subject: 97 MATHEMATICS AND COMPUTING Parallel Computing; Message Passing Interface; Scalable Algorithms; Parallel Programming Models Word ...

    10. Multiprocessor programming environment

      SciTech Connect (OSTI)

      Smith, M.B.; Fornaro, R.

      1988-12-01

      Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.

    11. CAP Program Guidance

      Broader source: Energy.gov [DOE]

      In 2002, the Department of Energy signed an interagency agreement with the Department of Defense’s Computer/Electronic Accommodations Program (CAP) to provide assistive/adaptive technology free of charge to DOE employees with disabilities. The following information regarding CAP is being provided to assist federal employees, managers and on-site disability coordinators with the CAP application process.

    12. Computer-Aided Design of Materials for use under High Temperature Operating Condition

      SciTech Connect (OSTI)

      Rajagopal, K. R.; Rao, I. J.

      2010-01-31

      The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace the trial-and-error procedures so that materials can be efficiently designed and engineered for the application in question, leading to enhanced material performance, a significant decrease in costs, and a shorter time to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high temperature, development of armor and missile heads; corrosion resistant tanks and containers, better conductors of electricity, and the numerous other applications that are envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is developed. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

    13. Programs & User Facilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Programs » Office of Science » Programs & User Facilities Programs & User Facilities Enabling remarkable discoveries, tools that transform our understanding of energy and matter and advance national, economic, and energy security Advanced Scientific Computing Research Applied Mathematics Co-Design Centers Exascale Co-design Center for Materials in Extreme Environments (ExMatEx) Center for Exascale Simulation of Advanced Reactors (CESAR) Center for Exascale Simulation of

    14. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes: Compute Node Configuration: 6,384 nodes; 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node; 24 cores per node (153,216 total cores); 32 GB DDR3 1333-MHz memory per node (6,000 nodes); 64 GB DDR3 1333-MHz memory per node (384 nodes). Peak Gflop/s rate: 8.4 Gflops/core, 201.6 Gflops/node, 1.28 Peta-flops for the entire machine. Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB

    15. Parallel programming with PCN

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. In includes both tutorial and reference material. It also presents the basic concepts that underly PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

    16. Program Evaluation: Program Logic

      Broader source: Energy.gov [DOE]

      Logic modeling is a thought process program evaluators have found to be useful for at least forty years and has become increasingly popular with program managers during the last decade. A logic model presents a plausible and sensible model of how the program will work under certain environmental conditions to solve identified problems. The logic model can be the basis for a convincing story of the program's expected performance – telling stakeholders and others the problem the program focuses on and how it is uniquely qualified to address it. The elements of the logic model are resources, activities, outputs, short, intermediate and longer-term outcomes. Some add the customers reached, as well as the relevant external contextual influences, present before a program begins or appearing as the program is implemented.

    17. About the Advanced Computing Tech Team | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Advanced Computing Tech Team About the Advanced Computing Tech Team The Advanced Computing Tech Team is made up of representatives from DOE and its national laboratories who are involved with developing and using advanced computing tools. The following is a list of some of those programs and what how they are currently using advanced computing in pursuit of their respective missions. Advanced Science Computing Research (ASCR) The mission of the Advanced Scientific Computing Research (ASCR)

    18. Hour of Code sparks interest in computer science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hour of Code sparks interest in computer science: Taking the mystery out of programming. February 1, 2016. Hour of Code participants work their way through fun computer programming tutorials.

    19. CNL Programming Considerations on Franklin

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CNL Programming Considerations on Franklin Shared Libraries (not supported) The Cray XT series currently does not support dynamic loading of executable code or shared libraries. Also, the related LD_PRELOAD environment variable is not supported. It is recommended to run Shared Library applications on Hopper. GNU C Runtime Library glibc Functions The lightweight OS on the compute nodes, Compute Node Linux (CNL), is designed to optimize

    20. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    1. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Grid Computing Center interior. As high-energy physics experiments grow larger in scope, they require more computing power to process and analyze data. Laboratories purchase rooms full of computer nodes for experiments to use. But many experiments need even more capacity during peak periods. And some experiments do not need to use all of their computing power all of the time. In the early 2000s, members of Fermilab's Computing Division

    2. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.

    3. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    4. GPU Computing - Dirac.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computing with Dirac, Hemant Shukla. Architectural differences: a CPU has fewer than 20 cores with 1-2 threads per core and hides latency with a large cache; a GPU has 512 cores with 10s to 100s of threads per core and hides latency with fast context switching. Programming models: CUDA (Compute Unified Device Architecture), OpenCL, Microsoft's DirectCompute. Third party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB, IDL, and

    5. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V. ); Levialdi, S. )

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    6. Supercomputing Challenge Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Supercomputers Deploying some of the world's fastest supercomputers is among ASC's accomplishments in advanced computing. However, it is not all about speed. Each new system is engineered to bring certain capabilities to bear on the problems of modeling and simulation that will enhance the overall goals of the Science-Based Stockpile Stewardship Program. The ASC platform acquisition strategy includes two computing platform classes: Commodity Technology (CT) systems and Advanced Technology (AT)

    7. Stewardship Science Graduate Fellowship Programs | National Nuclear

      National Nuclear Security Administration (NNSA)

      Security Administration Home / content Stewardship Science Graduate Fellowship Programs The Computational Science Graduate Fellowship (CSGF) The Department of Energy Computational Science Graduate Fellowship program provides outstanding benefits and opportunities to students pursuing doctoral degrees in fields of study that use high performance computing to solve complex science and engineering problems. The program fosters a community of bright, energetic and committed Ph.D. students,

    8. Weatherization Program

      Broader source: Energy.gov [DOE]

      Residences participating in the Home Energy Rebate or New Home Rebate Program may not also participate in the Weatherization Program.

    9. Center for Computing Research Summer Research Proceedings 2015.

      SciTech Connect (OSTI)

      Bradley, Andrew Michael; Parks, Michael L.

      2015-12-18

      The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

    10. Bringing Advanced Computational Techniques to Energy Research

      SciTech Connect (OSTI)

      Mitchell, Julie C

      2012-11-17

      Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

    11. University Program in Advanced Technology | National Nuclear...

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    12. Program Structure | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    13. Givens Summer Associate Program | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      degree from the University of Virginia, and his doctorate in mathematics from Princeton. ... Math and Computer Science Givens Summer Associate Program "Pure mathematics is, in its ...

    14. Computing and Computational Sciences Directorate - Joint Institute...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and ...

    15. Program Managers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Mathematics: Pieter Swart, T-5 Computer Science: Pat McCormick, CCS-1 Computational Partnerships: Galen Shipman, CCS-7 Basic Energy Sciences Materials Sciences & ...

    16. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    17. SEP Program Planning Template ("Program Planning Template") ...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SEP Program Planning Template ("Program Planning Template") More Documents & Publications...

    18. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    19. Avanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center (OSTI)

      1998-10-27

      The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
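
      A minimal sketch of the publish-and-steer pattern described above is shown below. The variable names and the XML-RPC transport are assumptions chosen for illustration; this is not the ALICE Memory Snooper API itself, which targets distributed-memory parallel programs.

          import threading
          from xmlrpc.server import SimpleXMLRPCServer

          published = {"timestep": 0, "relaxation_factor": 1.0}   # variables the solver chooses to expose

          def get_variable(name):
              """Let a remote client examine a published variable."""
              return published[name]

          def set_variable(name, value):
              """Let a remote client change a published variable (steering)."""
              published[name] = value
              return True

          server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
          server.register_function(get_variable)
          server.register_function(set_variable)
          threading.Thread(target=server.serve_forever, daemon=True).start()

          # The simulation keeps running; clients can read or adjust variables between steps.
          for step in range(3):
              published["timestep"] = step
              # ... advance the computation using published["relaxation_factor"] ...

      In the same spirit as the record, the computation owns the published registry and remote users only interact with the variables it chooses to expose.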

    20. Argonne programming camp sparks students' scientific curiosity | Argonne

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory computer scientist Ti Leggett worked to design the computer programming curriculum, incorporating a mix of short lectures, computer time and hands-on activities. The group that attended this summer's coding camp posed with their teachers and camp

    1. Intro - High Performance Computing for 2015 HPC Annual Report

      SciTech Connect (OSTI)

      Klitsner, Tom

      2015-10-01

      The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

    2. Computational Systems & Software Environment | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      Administration Computational Systems & Software Environment The mission of this national sub-program is to build integrated, balanced, and scalable computational capabilities to meet the predictive simulation requirements of NNSA. This sub-program strives to provide users of ASC computing resources a stable and seamless computing environment for all ASC-deployed platforms. Along with these powerful systems that ASC will maintain and field the supporting software infrastructure that the

    3. Computational and Experimental Screening of Mixed-Metal Perovskite

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4. INCITE grants awarded to 56 computational research projects ...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      "The INCITE program drives some of the world's most ambitious and groundbreaking computational research in science and engineering," said James Hack, director of the National ...

    5. Computer System, Cluster, and Networking Summer Institute Projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programs CSCNSI CSCNSI Projects Computer System, Cluster, and Networking Summer Institute Projects Present and past projects Contact Leader Stephan Eidenbenz (505) 667-3742...

    6. ASCR Leadership Computing Challenge Requests for Time Due February...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing...

    7. User Advisory Council | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      User Advisory Council: The User Advisory Council meets regularly to review major policies and to provide user feedback to the facility leadership. All council members are active Principal Investigators or users of ALCF computational resources through one or more of the allocation programs. Martin Berzins, Professor, Department of Computer Science, Scientific Computing and

    8. Program predicts waterflooding performance

      SciTech Connect (OSTI)

      Fassihi, M.R.; O'Brien, W.J.

      1987-04-01

      Water is a handheld calculator program for estimating waterflooding performance in a multilayered oil reservoir for patterns such as five-spot, direct line drive and staggered line drive. Topics considered in this paper include oil wells, sweep efficiency, well stimulation, computer calculations, stratification, enhanced recovery, calculators, reservoir rock, and reservoir engineering.

    9. Vehicle Technologies Office Merit Review 2014: Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

      Broader source: Energy.gov [DOE]

      Presentation given by NREL at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about significant enhancement of computational...

    10. Computational Quantum Chemistry at the RCC | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility Computational Quantum Chemistry at the RCC Start Date: May 12 2016 - 2:00pm to 3:30pm Building/Room: Kathleen A. Zar Room, John Crerar Library Location: University of Chicago Speaker(s): Jonathan Skone Speaker(s) Title: Scientific Programming Consultant, Research Computing Center Event Website: https://training.uchicago.edu/course_detail.cfm?course_id=1652 This workshop is meant to guide those less familiar with quantum chemistry software in setting themselves up quickly to begin

    11. Parallel programming with PCN

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1993-01-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

    12. Program evaluation: Weatherization Residential Assistance Partnership (WRAP) Program

      SciTech Connect (OSTI)

      Jacobson, Bonnie B.; Lundien, Barbara; Kaufman, Jeffrey; Kreczko, Adam; Ferrey, Steven; Morgan, Stephen

      1991-12-01

      The "Weatherization Residential Assistance Partnership," or WRAP program, is a fuel-blind conservation program designed to assist Northeast Utilities' low-income customers to use energy safely and efficiently. Innovative with respect to its collaborative approach and its focus on utilizing and strengthening the existing low-income weatherization service delivery network, the WRAP program offers an interesting model to other utilities that traditionally have relied on for-profit energy service contractors and highly centralized program implementation structures. This report presents appendices with surveys, a participant list, and the computer program used to examine and predict potential energy savings.

    13. Program Administration

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-08-21

      This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.

    14. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    15. Cosmic Reionization On Computers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Simulation of cosmic reionization Simulation of cosmic reionization. Dark red shows opaque neutral gas, transparent blue is ionized gas, and yellow dots are galaxies. Nick Gnedin, Fermilab Cosmic Reionization On Computers PI Name: Nickolay Gnedin PI Email: gnedin@fnal.gov Institution: Fermilab Allocation Program: INCITE Allocation Hours at ALCF: 65 Million Year: 2016 Research Domain: Physics Cosmic reionization, the most recent phase transition in the history of the universe, is the process by

    16. Reactor Safety Research Programs

      SciTech Connect (OSTI)

      Edler, S. K.

      1981-07-01

      This document summarizes the work performed by Pacific Northwest Laboratory (PNL) from January 1 through March 31, 1981, for the Division of Reactor Safety Research within the U.S. Nuclear Regulatory Commission (NRC). Evaluations of nondestructive examination (NDE) techniques and instrumentation are reported; areas of investigation include demonstrating the feasibility of determining the strength of structural graphite, evaluating the feasibility of detecting and analyzing flaw growth in reactor pressure boundary systems, examining NDE reliability and probabilistic fracture mechanics, and assessing the integrity of pressurized water reactor (PWR) steam generator tubes where service-induced degradation has been indicated. Experimental data and analytical models are being provided to aid in decision-making regarding pipe-to-pipe impacts following postulated breaks in high-energy fluid system piping. Core thermal models are being developed to provide better digital codes to compute the behavior of full-scale reactor systems under postulated accident conditions. Fuel assemblies and analytical support are being provided for experimental programs at other facilities. These programs include loss-of-coolant accident (LOCA) simulation tests at the NRU reactor, Chalk River, Canada; fuel rod deformation, severe fuel damage, and post-accident coolability tests for the ESSOR reactor Super Sara Test Program, Ispra, Italy; the instrumented fuel assembly irradiation program at Halden, Norway; and experimental programs at the Power Burst Facility, Idaho National Engineering Laboratory (INEL). These programs will provide data for computer modeling of reactor system and fuel performance during various abnormal operating conditions.

    17. Visiting Faculty Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Visiting Faculty Program Program Description The Visiting Faculty Program seeks to increase the research competitiveness of faculty members and their students at institutions historically underrepresented in the research community in order to expand the workforce vital to Department of Energy mission areas. As part of the program, selected university/college faculty members collaborate with DOE laboratory research staff on a research project of mutual interest. Program Objective The program is

    18. Ten Projects Awarded NERSC Allocations under DOE's ALCC Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Ten Projects Awarded NERSC Allocations under DOE's ALCC Program. June 24, 2014. NERSC Computer Room, photo by Roy Kaltschmidt, LBNL. Under the Department of Energy's (DOE) ASCR Leadership Computing Challenge (ALCC) program, 10 research teams at national laboratories and universities have been awarded 382.5 million hours of computing time at the National Energy Research Scientific Computing Center (NERSC). The

    19. Computational Electronics and Electromagnetics

      SciTech Connect (OSTI)

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.
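
      As a hedged illustration of the time-domain field solvers mentioned above, the sketch below advances a one-dimensional FDTD (Yee) update in normalized units. It is a textbook toy with invented grid sizes, not TSAR or DSI3D, and it ignores the unstructured conforming grids that are the thrust area's actual focus.

          import numpy as np

          nx, nsteps = 200, 500
          ez = np.zeros(nx)          # electric field on integer grid points
          hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
          for n in range(nsteps):
              hy += np.diff(ez)                                    # H update (normalized units, Courant number 1)
              ez[1:-1] += np.diff(hy)                              # E update on interior points
              ez[100] += np.exp(-0.5 * ((n - 30) / 10.0) ** 2)     # soft Gaussian source
          print("peak |Ez| after run:", np.abs(ez).max())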

    20. Visiting Faculty Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      covers stipend and travel reimbursement for the 10-week program. Teacher/faculty participants: 1. Program Coordinator: Scott Robbins. Email: srobbins@lanl.gov. Phone number: 663-5621...

    1. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    2. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges arising from introducing the GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU because it is embarrassingly parallel, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
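
      A minimal sketch of the one-thread-per-particle mapping described above is given below, using Numba's CUDA interface from Python rather than the C++/Tesla C2050 setup of the record. The one-turn map is a made-up placeholder, not the Tracy++ lattice physics, and a CUDA-capable GPU with Numba installed is assumed.

          import numpy as np
          from numba import cuda

          @cuda.jit
          def track(x, px, n_turns):
              i = cuda.grid(1)                 # global thread index: one thread per particle
              if i < x.size:
                  xi = x[i]
                  pi = px[i]
                  for _ in range(n_turns):
                      xi = xi + pi             # placeholder one-turn map (not a real lattice)
                      pi = pi - 0.01 * xi
                  x[i] = xi
                  px[i] = pi

          n = 4096
          x = cuda.to_device(np.linspace(-0.01, 0.01, n))
          px = cuda.to_device(np.zeros(n))
          threads_per_block = 128                                      # threads grouped into blocks
          blocks = (n + threads_per_block - 1) // threads_per_block    # blocks form the grid
          track[blocks, threads_per_block](x, px, 1000)
          print(np.abs(x.copy_to_host()).max())

      Because each particle's trajectory is independent, the kernel needs no inter-thread communication, which is what makes the calculation embarrassingly parallel.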

    3. NERSC HPC Program Requirements Reviews Overview

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Overview NERSC HPC Program Requirements Reviews Overview Scope These workshops are focused on determining the computational challenges facing research teams and the computational resources scientists will need to meet their research objectives. The goal is to assure that NERSC, the DOE Office of Science, and its program offices, will be able to provide the high performance computing and storage resources necessary to support the Office of Science's scientific goals. The merits of the scientific

    4. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

    5. ESnet Program Plan 1994

      SciTech Connect (OSTI)

      Merola, S.

      1994-11-01

      This Program Plan characterizes ESnet with respect to the current and future needs of Energy Research programs for network infrastructure, services, and development. In doing so, this document articulates the vision and recommendations of the ESnet Steering Committee regarding ESnet's development and its support of computer networking facilities and associated user services. To afford the reader a perspective from which to evaluate the ever-increasing utility of networking to the Energy Research community, we have also provided a historical overview of Energy Research networking. Networking has become an integral part of the work of DOE principal investigators, and this document is intended to assist the Office of Scientific Computing in ESnet program planning and management, including prioritization and funding. In particular, we identify the new directions that ESnet's development and implementation will take over the course of the next several years. Our basic goal is to ensure that the networking requirements of the respective scientific programs within Energy Research are addressed fairly. The proliferation of regional networks and additional network-related initiatives by other Federal agencies is changing the process by which we plan our own efforts to serve the DOE community. ESnet provides the Energy Research community with access to many other peer-level networks and to a multitude of other interconnected network facilities. ESnet's connectivity and relationship to these other networks and facilities are also described in this document. Major Office of Energy Research programs are managed and coordinated by the Office of Basic Energy Sciences, the Office of High Energy and Nuclear Physics, the Office of Magnetic Fusion Energy, the Office of Scientific Computing, and the Office of Health and Environmental Research. Summaries of these programs are presented, along with their functional and technical requirements for wide-area networking.

    6. ASC Program Elements | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Computing ASC Program Elements Established in 1995, the Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (NNSA) Defense Programs' shift in emphasis from test-based confidence to simulation-based confidence. Under ASC, scientific simulation capabilities are developed to analyze and predict the performance, safety, and reliability of nuclear weapons and to certify their functionality. ASC integrates the work of three

    7. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    8. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing: Dynamic Frequency Scaling. One means to lower the energy ...

    9. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
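
      The idea described in the abstract, detecting a defective link in one network and carrying the affected traffic over the independent second network, can be sketched in a few lines. The adjacency maps and the breadth-first fallback below are illustrative assumptions, not the patented mechanism.

          from collections import deque

          primary   = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # hypothetical first network
          secondary = {0: [2], 1: [3], 2: [0, 3], 3: [1, 2]}   # hypothetical independent second network
          defective = {(1, 2), (2, 1)}                          # link identified as defective in the primary network

          def route(src, dst):
              """Use the primary link when healthy; otherwise find a path on the secondary network."""
              if dst in primary[src] and (src, dst) not in defective:
                  return [src, dst]
              seen, queue = {src}, deque([[src]])
              while queue:
                  path = queue.popleft()
                  for nxt in secondary[path[-1]]:
                      if nxt == dst:
                          return path + [nxt]
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append(path + [nxt])
              return None

          print(route(1, 2))  # re-routed around the defective primary link, e.g. [1, 3, 2]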

    10. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied & Computational Math - Sandia Energy ...

    11. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    12. ASCR Leadership Computing Challenge Requests for Time Due February 14

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASCR Leadership Computing Challenge Requests for Time Due February 14. November 17, 2011, by Francesca Verdier. The ASCR Leadership Computing Challenge (ALCC) program is open to scientists from the research community in national laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing the clean energy agenda. Understanding the environmental impacts of

    13. Previous Computer Science Award Announcements | U.S. DOE Office of Science

      Office of Science (SC) Website

      (SC) Previous Computer Science Award Announcements Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing

    14. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    15. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
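
      The point that basic Monte Carlo simulation is embarrassingly parallel can be illustrated with a toy loss simulation; the portfolio parameters below are invented for illustration and are unrelated to the bank's actual credit risk framework. Each worker draws its own independent scenarios, so the work could equally be spread over nodes of a distributed memory cluster.

          import random
          from multiprocessing import Pool

          def simulate_losses(args):
              """One worker: independent scenarios with its own seed, no communication needed."""
              seed, n_scenarios = args
              rng = random.Random(seed)
              # toy portfolio: 100 obligors, 2% default probability, unit exposure each
              return [sum(rng.random() < 0.02 for _ in range(100)) for _ in range(n_scenarios)]

          if __name__ == "__main__":
              with Pool(4) as pool:            # workers could just as well be cluster nodes
                  chunks = pool.map(simulate_losses, [(seed, 25000) for seed in range(4)])
              losses = sorted(loss for chunk in chunks for loss in chunk)
              print("99.9% loss quantile:", losses[int(0.999 * len(losses))])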

    16. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing ? from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.

    18. Apply for the Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Parallel Computing » How to Apply Apply for the Parallel Computing Summer Research Internship Creating next-generation leaders in HPC research and applications development Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048 Email Current application deadline is February 5, 2016 with notification by early March 2016. Who can apply? Upper division undergraduate

    19. Program Automation

      Broader source: Energy.gov [DOE]

      Better Buildings Residential Network Data and Evaluation Peer Exchange Call Series: Program Automation, Call Slides and Discussion Summary, November 21, 2013. This data and evaluation peer exchange call discussed program automation.

    20. SC11 Education Program Applications due July 31

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SC11 Education Program Applications due July 31 SC11 Education Program Applications due July 31 June 9, 2011 by Francesca Verdier Applications for the Education Program are now being accepted. Submission website: https://submissions.supercomputing.org Applications deadline: Sunday, July 31, 2011 Acceptance Notifications: Monday, August 22, 2011 The Education Program is hosting a four-day intensive program that will immerse participants in High Performance Computing (HPC) and Computational and

    1. Retiree Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Library Services » Retiree Program Retiree Program The Research Library offers a one-year library card to retired LANL employees that allows use of Library materials. This service is only available to retired LANL employees. Who is eligible? Any Laboratory retiree not participating in any other program (i.e., Guest Scientist, Affiliate). Upon completion of your application, you will be notified of your acceptance into the program. This does not include past students. What is the term of the

    2. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SciTech Connect Program Benefits of Individual EERE Programs. FY 2010 Citation Details In-Document Search Title: Program Benefits of Individual EERE Programs. FY 2010 This collection of data tables shows the benefits metrics related to energy security, environmental impacts, and economic impacts for individual renewable energy technologies in the EERE portfolio. Data are presented for the years 2015, 2020, 2030, and 2050, for both the NEMS and MARKAL models. Authors: None, None Publication

    3. HVAC Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Commercial Program Development Commercial Current Promotions Industrial Federal Agriculture Heating Ventilation and Air Conditioning Energy efficient Heating Ventilation and...

    4. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

      SciTech Connect (OSTI)

      Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

      2010-12-02

      For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically expressed verification tests that can be used to verify a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
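
      To make the kernel/verification split concrete, here is a small sketch written in the spirit of the approach described above (it is not actual TORCH code, and the "sort" problem and function names are chosen purely for illustration): a candidate implementation of a reference kernel together with an implementation-independent verification test that checks only properties of the answer.

      ```python
      import numpy as np

      def kernel_sort(x):
          # Candidate solution to the "sort" problem; any algorithm, language, or hardware is allowed.
          return np.sort(x)

      def verify_sort(x_in, x_out):
          # Verification is expressed algorithmically, independent of the implementation:
          # the output must be non-decreasing and must be a permutation of the input.
          ordered = bool(np.all(np.diff(x_out) >= 0))
          permutation = np.array_equal(np.sort(x_in), np.sort(x_out))
          return ordered and permutation

      x = np.random.default_rng(0).random(1_000)
      assert verify_sort(x, kernel_sort(x))
      ```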

    5. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    6. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    7. Programming models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Task-based models Task-based models and abstractions (such as offered by CHARM++, Legion and HPX, for example) offer many attractive features for mapping computations onto...

    8. High performance computing and communications: FY 1997 implementation plan

      SciTech Connect (OSTI)

      NONE

      1996-12-01

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

    9. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    10. Timothy Williams | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Timothy Williams Deputy Director of Science Timothy Williams Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 2129 Argonne, IL 60439 630-252-1154 tjwilliams@anl.gov http://alcf.anl.gov/~zippy Tim Williams is a computational scientist at the Argonne Leadership Computing Facility (ALCF), where he serves as Deputy Director of Science. He is manager of the Early Science Program, which prepares scientific applications for early use of the facility's next-generation

    11. Computational Spectroscopy of Heterogeneous Interfaces | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility Complex interfaces between nanoparticles and a solvent Complex interfaces between nanoparticles and a solvent. N. Brawand, University of Chicago Computational Spectroscopy of Heterogeneous Interfaces PI Name: Giulia Galli PI Email: gagalli@uchicago.edu Institution: University of Chicago Allocation Program: INCITE Allocation Hours at ALCF: 150 Million Year: 2016 Research Domain: Materials Science The interfaces between solids, nanoparticles and liquids play a fundamental

    12. Large Scale Production Computing and Storage Requirements for Fusion Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

    13. Large Scale Production Computing and Storage Requirements for High Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 HEPlogo.jpg The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

    14. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

      Office of Science (SC) Website

      and Evolving MPI for Exascale | U.S. DOE Office of Science (SC) 2: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities

    15. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.

    16. Exploring HPCS Languages in Scientific Computing

      SciTech Connect (OSTI)

      Barrett, Richard F; Alam, Sadaf R; de Almeida, Valmor F; Bernholdt, David E; Elwasif, Wael R; Kuehn, Jeffery A; Poole, Stephen W; Shet, Aniruddha G

      2008-01-01

      As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

    17. Scalable Computer Performance and Analysis (Hierarchical INTegration)

      Energy Science and Technology Software Center (OSTI)

      1999-09-02

      HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.
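
      HINT's actual figure of merit (QUIPS, derived from a hierarchical integration) is not reproduced here; the toy sweep below only illustrates the general idea of measuring delivered performance as the problem size, and hence the memory footprint, grows. The kernel and sizes are illustrative assumptions.

      ```python
      import time
      import numpy as np

      def throughput(n):
          # Time a simple kernel at problem size n and report approximate operations per second.
          a = np.random.default_rng(0).random(n)
          t0 = time.perf_counter()
          a.dot(a)                                   # roughly 2*n floating-point operations
          return 2 * n / (time.perf_counter() - t0)

      # Sweeping n exposes how performance changes as the working set outgrows each cache level.
      for n in (10**3, 10**4, 10**5, 10**6, 10**7):
          print(f"n = {n:>8d}   ~{throughput(n):.3e} flop/s")
      ```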

    18. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Three-dimensional view of shock reflection in a square tube First-Principles Simulations of High-Speed Combustion and Detonation Alexei Khokhlov Allocation Program: INCITE Allocation Hours: 140 Million Science at ALCF Allocation Program - Any - INCITE ALCC ESP Director's Discretionary Year Year -Year 2008 2009 2010 2011 2012 2013 2014 2015 2016 Research Domain - Any - Physics Mathematics Computer Science Chemistry Earth Science Energy Technologies Materials Science Engineering Biological

    19. Towards Energy-Centric Computing and Computer Architecture

      SciTech Connect (OSTI)

      2011-02-09

      Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software. About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL, where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar, he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards between 2001 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of IEEE and ACM.

    20. Extreme Scale Computing to Secure the Nation

      SciTech Connect (OSTI)

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

      Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program under the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. 
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article not only discusses the need for a future computing capability at the exascale for the SBSS program, but also considers high-performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapon's design from post-detonation evidence (nuclear counterterrorism).

    1. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    2. PACKAGE (Plasma Analysis, Chemical Kinetics and Generator Efficiency): a computer program for the calculation of partial chemical equilibrium/partial chemical rate controlled composition of multiphased mixtures under one dimensional steady flow

      SciTech Connect (OSTI)

      Yousefian, V.; Weinberg, M.H.; Haimes, R.

      1980-02-01

      The NASA CEC Code was the starting point for PACKAGE, whose function is to evaluate the composition of a multiphase combustion product mixture under the following chemical conditions: (1) total equilibrium with pure condensed species; (2) total equilibrium with ideal liquid solution; (3) partial equilibrium/partial finite rate chemistry; and (4) fully finite rate chemistry. The last three conditions were developed to treat the evolution of complex mixtures such as coal combustion products. The thermodynamic variable pairs considered are pressure (P) and enthalpy, P and entropy, or P and temperature. Minimization of Gibbs free energy is used. This report gives detailed discussions of the formulation and input/output information used in the code. Sample problems are given. The code development, description, and current programming constraints are discussed. (DLC)
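
      The core idea of the equilibrium conditions, computing a composition by minimizing the total Gibbs free energy subject to element balance, can be sketched in a few lines. The reaction chosen (N2O4 = 2 NO2), the ideal-gas assumption at 1 bar, and the standard Gibbs energies are illustrative assumptions for this sketch, not values or methods taken from the report.

      ```python
      import numpy as np
      from scipy.optimize import minimize

      R, T = 8.314, 298.15                       # J/(mol K), K
      g0 = np.array([97.9e3, 51.3e3])            # standard Gibbs energies of N2O4, NO2 (illustrative, J/mol)

      def gibbs(n):
          # Total Gibbs energy of an ideal-gas mixture at 1 bar: G = sum n_i * (g0_i + R*T*ln x_i)
          x = n / n.sum()
          return float(np.sum(n * (g0 + R * T * np.log(x))))

      # Start from 1 mol N2O4; conserving nitrogen atoms imposes 2*n_N2O4 + n_NO2 = 2.
      cons = [{"type": "eq", "fun": lambda n: 2.0 * n[0] + n[1] - 2.0}]
      res = minimize(gibbs, x0=np.array([0.5, 1.0]), bounds=[(1e-9, None)] * 2,
                     constraints=cons, method="SLSQP")
      print("equilibrium moles [N2O4, NO2]:", res.x)
      ```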

    3. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect (OSTI)

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

    4. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Description SAGE, the Summer of Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research. The program emphasizes both teaching of field methods and research related to basic science and a variety of applied problems. SAGE is hosted by the National Security Education Center and the Earth and Environmental Sciences Division of the Los Alamos National

    5. exercise program

      National Nuclear Security Administration (NNSA)

      and dispose of many different hazardous substances, including radioactive materials, toxic chemicals, and biological agents and toxins.

      There are a few programs NNSA uses...

    6. Counterintelligence Program

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1992-09-04

      To establish the policies, procedures, and specific responsibilities for the Department of Energy (DOE) Counterintelligence (CI) Program. This directive does not cancel any other directive.

    7. Counterintelligence Program

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2004-12-10

      The Order establishes Counterintelligence Program requirements and responsibilities for the Department of Energy, including the National Nuclear Security Administration. Supersedes DOE 5670.3.

    8. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research....

    9. Volunteer Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National VolunteerMatch Retired and Senior Volunteer Program United Way of Northern New Mexico United Way of Santa Fe County Giving Employee Giving Campaign Holiday Food Drive...

    10. Programming Stage

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-05-21

      This chapter addresses plans for the acquisition and installation of operating environment hardware and software and design of a training program.

    11. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) Network programs, we provide STEM role models...

    12. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Description Inspiring girls to recognize their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) ...

    13. Special Programs

      Office of Energy Efficiency and Renewable Energy (EERE)

      Headquarters Human Resources Operations promotes a variety of hiring flexibilities for managers to attract a diverse workforce, from Student Internship Program opportunities (Pathways), Veteran...

    14. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    15. State Energy Program & Weatherization Assistance Program: Update

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      eere.energy.gov AnnaMaria Garcia Acting Program Manager for DOE's Weatherization & Intergovernmental Program Weatherization and Intergovernmental Program State Energy Program & ...

    16. HSS Voluntary Protection Program: Articles

      Broader source: Energy.gov [DOE]

      AJHA Program - The Automated Job Hazard Analysis (AJHA) computer program is part of an enhanced work planning process employed at the Department of Energy's Hanford worksite. The AJHA system is routinely used to perform evaluations for medium- and high-risk work, and in the development of corrective maintenance work packages at the site. The tool is designed to ensure that workers are fully involved in identifying the hazards, requirements, and controls associated with tasks.

    17. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
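
      The package itself is a C++/Fortran-90 library; purely to illustrate the same idea (higher precision delivered through custom data types and operator overloading, with minimal changes to conventional-looking code), Python's standard decimal module can be pushed to 50 digits:

      ```python
      from decimal import Decimal, getcontext

      getcontext().prec = 50            # work with 50 significant digits
      two = Decimal(2)
      root = two.sqrt()                 # operator overloading keeps the code looking conventional
      print(root)
      print(root * root - two)          # residual near 1e-50 instead of double precision's ~1e-16
      ```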

    18. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
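
      The control flow claimed in the abstract can be summarized compactly. The sketch below is purely schematic: the callable names are illustrative stand-ins supplied by the caller, not the actual DMA engine interfaces of the parallel computer.

      ```python
      def origin_dma_transfer(data, portion, send_rts, fifo_send, rts_acked, direct_put):
          """Schematic version of the transfer logic described above (illustrative names only)."""
          send_rts()                                       # request-to-send to the target DMA engine
          offset = 0
          # Until the RTS is acknowledged, keep streaming fixed-size portions via memory FIFO operations.
          while offset < len(data) and not rts_acked():
              fifo_send(data[offset:offset + portion])
              offset += portion
          # Once the acknowledgement arrives, move whatever remains with a single direct put.
          if offset < len(data):
              direct_put(data[offset:])
      ```

      The design point is latency hiding: useful data flows through the FIFO path while the handshake completes, and the bulk of the payload then moves by the cheaper direct put.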

    19. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    20. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    1. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    2. NV Energy -Energy Smart Schools Program | Department of Energy

      Broader source: Energy.gov (indexed) [DOE]

      pending approval Vending Machine Controls Personal Computing Equipment Program Info Sector Name Utility Administrator Nevada Power Company Website http:www.nvenergy.com...

    3. ASCR Program Documents | U.S. DOE Office of Science (SC)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Documents Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Program Documents Archive HPC Workshop Series ASCR Workshops and Conferences ASCR Presentations 100Gbps Science Network Related Links Contact Information Advanced Scientific Computing

    4. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    5. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      load-2 TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    6. Cupola Furnace Computer Process Model

      SciTech Connect (OSTI)

      Seymour Katz

      2004-12-31

      The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to enable very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

    7. Overview of the Defense Programs Research and Technology Development Program for fiscal year 1993. Appendix materials

      SciTech Connect (OSTI)

      Not Available

      1993-09-30

      The pages that follow contain summaries of the nine R&TD Program Element Plans for Fiscal Year 1993 that were completed in the Spring of 1993. The nine program elements are aggregated into three program clusters as follows: Design Sciences and Advanced Computation; Advanced Manufacturing Technologies and Capabilities; and Advanced Materials Sciences and Technology.

    8. Student Internship Programs Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Student Internship Programs Program Description The objective of the Laboratory's student internship programs is to provide students with opportunities for meaningful hands- on experience supporting educational progress in their selected scientific or professional fields. The most significant impact of these internship experiences is observed in the intellectual growth experienced by the participants. Student interns are able to appreciate the practical value of their education efforts in their

    9. Programming Challenges Presentations | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Programming Challenges Presentations Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee

    10. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology Search Site submit Contacts | Sponsors Mathematical and Computational Epidemiology Los Alamos National Laboratory Menu About Contact Sponsors Research Agent-based Modeling Mixing Patterns, Social Networks Mathematical Epidemiology Social Internet Research Uncertainty Quantification Publications People Mathematical and Computational Epidemiology (MCEpi) Quantifying model uncertainty in agent-based simulations for

    11. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    12. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
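
      A minimal sketch of the idea in the abstract, with hypothetical names chosen purely for illustration (this is not the patented implementation): events are logged together with an undo action, the history can be searched, and selected past events can be undone.

      ```python
      class Logbook:
          def __init__(self):
              self._history = []                       # list of (description, undo_action) pairs

          def log(self, description, undo_action):
              self._history.append((description, undo_action))

          def search(self, term):
              return [d for d, _ in self._history if term in d]

          def undo(self, description):
              for i, (d, undo_action) in enumerate(self._history):
                  if d == description:
                      undo_action()                    # revert the effect of the selected past event
                      del self._history[i]
                      return True
              return False

      # Example: log a variable change so it can be undone later.
      state = {"x": 1}
      book = Logbook()
      old = state["x"]; state["x"] = 2
      book.log("set x to 2", lambda: state.update(x=old))
      book.undo("set x to 2")
      assert state["x"] == 1
      ```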

    13. Webinar: AspireIT K-12 Outreach Program

      Broader source: Energy.gov [DOE]

      AspireIT K-12 Outreach Program is a grant that connects high school and college women with K-12 girls interested in computing. Using a near-peer model, program leaders teach younger girls...

    14. Program Description | Robotics Internship Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      March 4, 2016. Apply Now for the Robotics Internship About the Internship Program Description Start of Appointment Renewal of Appointment End of Appointment Stipend Information...

    15. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

      SciTech Connect (OSTI)

      Corones, James

      2013-09-23

      High-end computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington, DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission-critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission-critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties of applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

    16. Parallel computing in enterprise modeling.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
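
      For readers unfamiliar with the term, a toy member of the entity-based class is sketched below: a discrete-event, single-server queue in which the quantity of interest (mean waiting time) emerges only from simulating many individual customers. It is far simpler than, and unrelated to, the parallel plugin the report describes; the rates and customer count are arbitrary illustrative values.

      ```python
      import random

      def mean_wait(arrival_rate, service_rate, n_customers, seed=0):
          # Single-server FIFO queue: each customer is an entity; the mean wait is emergent.
          rng = random.Random(seed)
          t = busy_until = 0.0
          total_wait = 0.0
          for _ in range(n_customers):
              t += rng.expovariate(arrival_rate)      # next arrival time
              start = max(t, busy_until)              # wait if the server is still busy
              total_wait += start - t
              busy_until = start + rng.expovariate(service_rate)
          return total_wait / n_customers

      print(mean_wait(0.9, 1.0, 100_000))             # heavily loaded queue -> long average waits
      ```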

    17. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    18. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare-earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    19. NNSA releases Stockpile Stewardship Program quarterly experiments summary |

      National Nuclear Security Administration (NNSA)

      National Nuclear Security Administration releases Stockpile Stewardship Program quarterly experiments summary May 12, 2015 WASHINGTON, DC - The National Nuclear Security Administration today released its current quarterly summary of experiments conducted as part of its science-based Stockpile Stewardship Program. The experiments carried out within the program are used in combination with complex computational models and NNSA's Advanced Simulation and Computing (ASC) Program to assess the

    20. Final Report: Correctness Tools for Petascale Computing

      SciTech Connect (OSTI)

      Mellor-Crummey, John

      2014-10-27

      In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, University of Maryland, the University of Wisconsin—Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.
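
      The races such tools hunt for live in large C/MPI and multithreaded codes, but the pattern is easy to show in miniature. In the hedged sketch below, an unsynchronized read-modify-write on a shared counter is the kind of nondeterministic error a race detector would flag; whether the lost updates actually appear in a given run depends on the interpreter and scheduler.

      ```python
      import threading

      counter = 0
      lock = threading.Lock()

      def unsafe(n):
          global counter
          for _ in range(n):
              counter += 1        # load-add-store on shared state with no synchronization: a data race

      def safe(n):
          global counter
          for _ in range(n):
              with lock:          # the kind of fix a correctness tool would point toward
                  counter += 1

      threads = [threading.Thread(target=unsafe, args=(100_000,)) for _ in range(4)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(counter)              # may be less than 400000 when updates are lost
      ```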

    1. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

      SciTech Connect (OSTI)

      Nikkel, D J

      2009-07-21

      This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3) Identify the technical challenges facing the program and a strategy to resolve them.

    2. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    3. Back to the ASCR Program Documents Page | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      ASCR Program Documents » ASCR Program Documents Archive Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Program Documents Archive HPC Workshop Series ASCR Workshops and Conferences ASCR Presentations 100Gbps Science Network Related Links Contact

    4. Programming models

      SciTech Connect (OSTI)

      Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim

      2011-01-14

      A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

    5. Deconvolution Program

      Energy Science and Technology Software Center (OSTI)

      1999-02-18

      The program is suitable for many applications in applied mathematics, experimental physics, and signal-analysis systems, as well as a range of engineering applications, e.g., spectrum deconvolution, signal analysis, and system property analysis.
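
      As an illustration of the kind of computation such a program performs (not its actual algorithm, which the record does not specify), the sketch below blurs a spike train with a known kernel and recovers it by regularized frequency-domain deconvolution:

      ```python
      import numpy as np

      def deconvolve(measured, kernel, eps=1e-6):
          # Wiener-style division in the frequency domain; eps guards against division by ~0.
          n = len(measured)
          H = np.fft.rfft(kernel, n)
          Y = np.fft.rfft(measured, n)
          return np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + eps), n)

      n = 256
      x = np.zeros(n); x[[50, 128, 200]] = 1.0                          # "true" spike train
      kernel = np.exp(-np.arange(n) / 5.0)                              # exponential smearing kernel
      blurred = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kernel), n)   # circular convolution
      recovered = deconvolve(blurred, kernel)
      print(np.abs(recovered - x).max())                                # near zero: spikes recovered
      ```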

    6. Science Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The focal point for basic and applied R&D programs with a primary focus on energy but also encompassing medical, biotechnology, high-energy physics, and advanced scientific ...

    7. Integrated Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Review (IPR) Quarterly Business Review (QBR) Access to Capital Debt Management July 2013 Aug. 2013 Sept. 2013 Oct. 2013 Nov. 2013 Dec. 2013 Jan. 2014 Feb. 2014 March...

    8. Program Overview

      Broader source: Energy.gov [DOE]

      The culture of the DOE community will be based on standards. Technical standards will formally integrate part of all DOE facility, program and project activities. The DOE will be recognized as a...

    9. Volunteer Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Volunteer Program Volunteer Program Our good neighbor pledge includes active employee engagement in our communities through volunteering. More than 3,000 current and retired Lab employees have logged more than 1.8 million volunteer hours since 2007. August 19, 2015 LANL employee volunteers with Mountain Canine Corps Lab employee Debbi Miller volunteers for the Mountain Canine Corps with her search and rescue dogs. She also volunteers with another search and rescue organization: the Los Alamos

    10. Program Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    11. Educational Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Educational Programs Educational Programs A collaboration between Los Alamos National Laboratory and the University of California at San Diego (UCSD) Jacobs School of Engineering Contact Institute Director Charles Farrar (505) 663-5330 Email UCSD EI Director Michael Todd (858) 534-5951 Professional Staff Assistant Ellie Vigil (505) 667-2818 Email Administrative Assistant Rebecca Duran (505) 665-8899 Email There are two educational components to the Engineering Institute. The Los Alamos Dynamic

    12. Special Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    13. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

      DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

      James Menart

      2013-06-07

      This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.
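
      The record describes what GEO3D computes rather than how; as a hedged illustration of the underlying class of heat-transfer problem only (not GEO3D's three-dimensional model), the sketch below marches a one-dimensional radial conduction equation for the ground around a single vertical bore, with all properties and boundary conditions assumed.

        # Explicit finite-difference model of transient radial conduction,
        #   dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr),
        # with the borehole wall held at the loop temperature. Illustration only.
        import numpy as np

        alpha = 1.0e-6                       # ground thermal diffusivity [m^2/s] (assumed)
        T0, Tb = 12.0, 4.0                   # undisturbed ground / borehole-wall temps [C]
        rb, R = 0.06, 6.0                    # borehole radius and far-field radius [m]
        m = 300
        r = np.linspace(rb, R, m)
        dr = r[1] - r[0]
        dt = 0.4 * dr**2 / alpha             # stable explicit time step
        T = np.full(m, T0)
        T[0] = Tb

        for _ in range(int(30 * 24 * 3600 / dt)):           # simulate ~30 days
            d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
            dTdr = (T[2:] - T[:-2]) / (2 * dr)
            T[1:-1] += dt * alpha * (d2T + dTdr / r[1:-1])  # radial conduction update
            T[0], T[-1] = Tb, T0                            # boundary conditions

        print(f"ground temperature 1 m from the bore: {np.interp(1.0, r, T):.2f} C")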

    14. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing High Performance Computingcwdd2015-03-18T21:41:24+00:00...

    15. NERSC HPC Program Requirements Review Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Published Reports NERSC HPC Program Requirements Review Reports These publications comprise the final reports from the HPC requirements reviews presented to the Department of Energy. Downloads ASCR2017Final.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research - Target 2017 NerscBES2017ReqRevFinal.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Basic Energy Sciences - Target 2017

    16. Parallel programming with Ada

      SciTech Connect (OSTI)

      Kok, J.

      1988-01-01

      To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

    17. Integrated Computational Materials Engineering (ICME) for Mg: International

      Broader source: Energy.gov (indexed) [DOE]

      Pilot Project | Department of Energy 1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation PDF icon lm012_li_2011_o.pdf More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project (Part 1) Vehicle Technologies Office Merit Review 2015: Magnesium-Intensive Front End

    18. Computationally Efficient Modeling of High-Efficiency Clean Combustion

      Broader source: Energy.gov (indexed) [DOE]

      Engines | Department of Energy 2 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting PDF icon ace012_flowers_2012_o.pdf More Documents & Publications Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines Simulation of High Efficiency Clean Combustion Engines and Detailed Chemical Kinetic Mechanisms Development

    19. Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) |

      Broader source: Energy.gov (indexed) [DOE]

      Department of Energy 1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation PDF icon es099_pesaran_2011_p.pdf More Documents & Publications Overview of Computer-Aided Engineering of Batteries (CAEBAT) and Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries Battery Thermal Modeling and Testing Progress of Computer-Aided Engineering of Batteries (CAEBAT)

    20. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    1. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2002-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    2. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2001-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    3. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2003-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    4. High performance computing and communications: FY 1996 implementation plan

      SciTech Connect (OSTI)

      1995-05-16

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

    5. Plasma Simulation Program

      SciTech Connect (OSTI)

      Greenwald, Martin

      2011-10-04

      Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one and all are supplemental or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities. [1]. Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical impediment to successful operation of machines like ITER. If disruptions prove unable to be avoided, their associated dynamics and effects will be addressed in the next phase of the FSP.

    6. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    7. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goal are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents If you think there has been a computer security incident you should contact NERSC Security as soon as

    8. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    9. Student Internship Programs Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for a summer high school student to 75,000 for a Ph.D. student working full-time for a year. Program Coordinator: Scott Robbins Email: srobbins@lanl.gov Phone number: 663-5621...

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    11. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... for use in Advanced Strategic Computing codes Theory and modeling of dense plasmas in ICF and astrophysics environments Theory and modeling of astrophysics in support of NASA ...

    12. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    13. Development of Computer-Aided Design Tools for Automotive Batteries |

      Broader source: Energy.gov (indexed) [DOE]

      Department of Energy 8_hartridge_2012_o.pdf More Documents & Publications Progress of Computer-Aided Engineering of Batteries (CAEBAT) Vehicle Technologies Office Merit Review 2014: Development of Computer-Aided Design Tools for Automotive Batteries Review of A123s HEV and PHEV USABC Programs

    14. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

      Broader source: Energy.gov [DOE]

      Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

    15. Computer-Aided Engineering of Batteries for Designing Better Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Lee, K. J.; Santhanagopalan, S.

      2012-02-01

      This presentation describes the current status of the DOE's Energy Storage R and D program, including modeling and design tools and the Computer-Aided Engineering for Automotive Batteries (CAEBAT) program.

    16. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee

      Office of Scientific and Technical Information (OSTI)

      Report on Scientific and Technical Information (Program Document) | SciTech Connect. The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy's Office of

    17. Large Scale Production Computing and Storage Requirements for Nuclear

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics: Target 2017. This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

    18. Architectural requirements for the Red Storm computing system. (Technical

      Office of Scientific and Technical Information (OSTI)

      Report) | SciTech Connect. This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This

    19. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    20. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. 
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

    1. Postdoctoral Program Program Description The Postdoctoral (Postdoc...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Postdoctoral Program Program Description The Postdoctoral (Postdoc) Research program offers the opportunity for appointees to perform research in a robust scientific R&D...

    2. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math. National security depends on science ...

    3. The Macolumn - the Mac gets geophysical. [A review of geophysical software for the Apple Macintosh computer

      SciTech Connect (OSTI)

      Busbey, A.B. )

      1990-02-01

      Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.

    4. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
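
      The record describes a machine that solves simultaneous linear equations by successive approximation; as an illustration of that iterative scheme in software (commonly known today as the Gauss-Seidel method), rather than of the patented hardware itself, a minimal sketch follows using an arbitrary diagonally dominant system.

        # Successive-approximation (Gauss-Seidel) iteration for A x = b.
        import numpy as np

        A = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])
        b = np.array([15.0, 10.0, 10.0])
        x = np.zeros_like(b)

        for _ in range(50):                        # converges rapidly for this system
            for i in range(len(b)):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]        # update using the newest values

        print(x, "residual:", np.linalg.norm(A @ x - b))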

    5. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC Theory, Simulation, and Computation Supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Theory, modeling, informatics Suites of experiment data High performance computing, simulation, visualization Contacts Associate Director John Sarrao Deputy Associate Director Paul Dotson Directorate Office (505) 667-6645 Email Applying the Scientific

    6. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
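
      As a hypothetical sketch of the bookkeeping the abstract describes (a persistent record of each processor's health, configuration, and allocation that is consulted at allocation time), and not the actual CPA code, schema, or API:

        # Hypothetical node table and allocation routine; names and fields are invented.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Node:
            node_id: int
            healthy: bool = True
            job: Optional[str] = None      # None means unallocated

        def allocate(nodes, job, count):
            """Assign `count` healthy, free nodes to `job`; return their ids."""
            free = [n for n in nodes if n.healthy and n.job is None]
            if len(free) < count:
                raise RuntimeError("not enough healthy free nodes")
            chosen = free[:count]
            for n in chosen:
                n.job = job                # record the allocation decision
            return [n.node_id for n in chosen]

        nodes = [Node(i) for i in range(8)]
        nodes[3].healthy = False           # an unhealthy processor is skipped
        print(allocate(nodes, "job-42", 4))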

    7. Graduate Student Fellowship Program | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Graduate Student Fellowship Program DOE Office of Science Graduate Fellowship Program The Department of Energy (DOE) Office of Science (SC) has established the DOE Office of Science Graduate Fellowship ( DOE SCGF) program to support outstanding students to pursue graduate training in basic research in areas of physics, biology, chemistry, mathematics, engineering, computational sciences, and environmental sciences relevant to the Office of Science and to encourage the development of the next

    8. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    9. Overview of the Defense Programs Research and Technology Development Program for Fiscal Year 1993

      SciTech Connect (OSTI)

      Not Available

      1993-09-30

      This document presents a programmatic overview and program element plan summaries for conceptual design and assessment; physics; computation and modeling; system engineering science and technology; electronics, photonics, sensors, and mechanical components; chemistry and materials; and special nuclear materials, tritium, and explosives.

    10. Advanced Scientific Computing Research Network Requirements

      SciTech Connect (OSTI)

      Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

      2013-03-08

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

    11. Characteristics of Strong Programs

      Broader source: Energy.gov [DOE]

      Existing financing programs offer a number of important lessons on effective program design. Some characteristics of strong financing programs drawn from past program experience are described below.

    12. Certification of computer professionals: A good idea?

      SciTech Connect (OSTI)

      Boggess, G.

      1994-12-31

      In the early stages of computing there was little understanding or attention paid to the ethical responsibilities of professionals. Companies routinely put secretaries and music majors through 30 hours of video training and turned them loose on data processing projects. As the nature of the computing task changed, these same practices were followed and the trainees were set loose on life-critical software development projects. The enormous risks of using programmers with limited training have been highlighted by the GAO report on the BSY-2 program.

    13. Cheaper Adjoints by Reversing Address Computations

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hascoët, L.; Utke, J.; Naumann, U.

      2008-01-01

      The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
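
      For readers unfamiliar with the reverse mode, a minimal tape-based sketch follows; it illustrates the storage of intermediate values that the paper seeks to reduce, not the authors' address-inversion method, and the operator set is limited to what the example needs.

        # Forward sweep records intermediates on a tape; reverse sweep replays it
        # backwards to accumulate adjoints (derivatives of the output).
        import math

        tape = []                              # entries: (output var, [(input var, partial)])

        class Var:
            def __init__(self, value):
                self.value, self.adjoint = value, 0.0
            def __mul__(self, other):
                out = Var(self.value * other.value)
                tape.append((out, [(self, other.value), (other, self.value)]))
                return out
            def sin(self):
                out = Var(math.sin(self.value))
                tape.append((out, [(self, math.cos(self.value))]))
                return out

        x, y = Var(1.2), Var(0.7)
        z = (x * y).sin()                      # z = sin(x * y)
        z.adjoint = 1.0
        for out, partials in reversed(tape):   # reverse order, as the paper discusses
            for var, partial in partials:
                var.adjoint += out.adjoint * partial

        print(x.adjoint, y.adjoint)            # y*cos(x*y), x*cos(x*y)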

    14. START Program

      Broader source: Energy.gov [DOE]

      The Strategic Technical Assistance Response Team (START) Program is part of the DOE Office of Indian Energy effort to assist in the development of tribal renewable energy projects. Through START, Tribes in the 48 contiguous states and Alaska can apply for and are selected to receive technical assistance from DOE and national laboratory experts to move projects closer to implementation.

    15. Reconnection methods for an arbitrary polyhedral computational grid

      SciTech Connect (OSTI)

      Rasskazova, V.V.; Sofronov, I.D.; Shaporenko, A.N.; Burton, D.E.; Miller, D.S.

      1996-08-01

      The paper suggests a method for local reconstruction of a 3D irregular computational grid and an algorithm for its program implementation. Two basic grid reconstruction operations are used: pasting two cells that share a common face, and cutting a cell into two by a given plane. The paper presents and analyzes the criteria for choosing one operation or the other. A program for local reconstruction of a 3D irregular grid is used to conduct two test computations, and the computed results are given.
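
      As a hedged, purely topological sketch of the "paste" primitive named in the abstract (the "cut" primitive additionally needs plane-polyhedron geometry and is omitted), cells are represented simply as sets of face identifiers; this illustrates the idea, not the authors' data structures.

        # Paste: merge two cells that share exactly one face by dropping the
        # shared face and uniting the remaining boundary faces.
        def paste(cell_a, cell_b):
            shared = cell_a & cell_b
            if len(shared) != 1:
                raise ValueError("paste requires exactly one common face")
            return (cell_a | cell_b) - shared      # boundary of the merged cell

        hex_a = {"f1", "f2", "f3", "f4", "f5", "f_shared"}
        hex_b = {"f_shared", "f6", "f7", "f8", "f9", "f10"}
        print(sorted(paste(hex_a, hex_b)))         # ten faces bound the merged cell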

    16. Program Year 2008 State Energy Program Formula

      Broader source: Energy.gov [DOE]

      U.S. Department of Energy (DOE) State Energy Program (SEP), SEP Program Guidance Fiscal Year 2008, Program Year 2008, energy efficiency and renewable energy programs in the states, DOE Office of Energy Efficiency and Renewable Energy

    17. Parallel programming with PCN. Revision 1

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. In includes both tutorial and reference material. It also presents the basic concepts that underly PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

    18. Sandia National Laboratories: Advanced Simulation and Computing: Contact

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASC Contact ASC Sandia ASC Program Contacts Program Director Bruce Hendrickson bahendr@sandia.gov Program Manager David Womble dewombl@sandia.gov Integrated Codes Lead Scott Hutchinson sahutch@sandia.gov Physics & Engineering Modeling Lead Jim Redmond jmredmo@sandia.gov Verification & Validation Lead Curt Nilsen canilse@sandia.gov Computational Systems & Software Engineering Lead Ken Alvin kfalvin@sandia.gov Facilities Operations & User Support Lead Tom Klitsner

    19. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    20. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers gain deep mechanistic view of a critical ATP-driven calcium pump. Read More Visualization of primate tooth ALCF's new data science program targets "big data" problems ...

    1. Foundational Tools for Petascale Computing

      SciTech Connect (OSTI)

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

    2. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world."

    3. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulating weather and climate over the entire globe requires the most massive high-performance computers that exist. Such extreme problems are found in numerous laboratory missions, including astrophysics, weapons programs, materials science, and earth science.

    4. Eight Projects Selected for NERSC's Data Intensive Computing Pilot Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    5. DOE Office of Science Computing Facility Operational Assessment Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    6. An Information Dependant Computer Program for Engine Exhaust...

      Broader source: Energy.gov (indexed) [DOE]

      use exhaust waste heat from individual diesel power plants. PDF icon deer09avadhanula.pdf More Documents & Publications Modular Low Cost High Energy Exhaust Heat Thermoelectric ...

    7. Environmental Programs

      Office of Environmental Management (EM)

      Alamos National Laboratory | Environmental Programs Material Disposal Areas and the Threat of Wildfire Fact Sheet Established in 1943, Los Alamos National Laboratory now consists of 1,280 buildings in 47 technical areas spread out over 37 square miles. The complex includes 11 nuclear facilities and more than 10,000 workers. Los Alamos and Wildfires In the past, large wildfires in the area, including the La Mesa Fire (1977), the Dome Fire (1996), the Oso Fire (1998), the Cerro Grande Fire (2000)

    8. Program Development

      SciTech Connect (OSTI)

      Atencio, Julian J.

      2014-05-01

      This presentation covers how to go about developing a human reliability program. In particular, it touches on conceptual thinking, raising awareness in an organization, the actions that go into developing a plan. It emphasizes evaluating all positions, eliminating positions from the pool due to mitigating factors, and keeping the process transparent. It lists components of the process and objectives in process development. It also touches on the role of leadership and the necessity for audit.

    9. On Undecidability Aspects of Resilient Computations and Implications to Exascale

      SciTech Connect (OSTI)

      Rao, Nageswara S

      2014-01-01

      Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

    10. Present and Future Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    11. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Report 2012, Argonne Leadership Computing Facility. Contents: Director's Message; About ALCF; Introducing Mira.

    12. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    13. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G. )

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    14. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Key facts about the Argonne Leadership Computing Facility (September 2013): User support and services. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    15. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    16. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    17. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    18. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL. Fun fact: Most systems require air conditioning or chilled water to cool super powerful

    19. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    20. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    1. AutoPIPE Extract Program

      Energy Science and Technology Software Center (OSTI)

      1993-07-02

      The AutoPIPE Extract Program (APEX) provides an interface between CADAM (Computer Aided Design and Manufacturing) Release 21 drafting software and the AutoPIPE, Version 4.4, piping analysis program. APEX produces the AutoPIPE batch input file that corresponds to the piping shown in a CADAM model. The card image file contains header cards, material cards, and pipe cross section cards as well as tee, bend, valve, and flange cards. Node numbers are automatically generated. APEX processes straight pipe, branch lines, and ring geometries.
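
      As a rough sketch of the kind of translation APEX performs (walking a piping description and emitting fixed-format card-image records with automatically generated node numbers), the example below uses an invented record layout; it is not the actual AutoPIPE batch format or APEX's logic.

        # Hypothetical card writer; the field layout and card names are invented.
        segments = [("A100", 1.5), ("A100", 2.0), ("B200", 0.75)]   # (pipe id, length [m])

        def make_cards(segments, start_node=10, step=10):
            node = start_node
            cards = ["HEADER  DEMO MODEL"]                     # header card
            for pipe_id, length in segments:
                nxt = node + step                              # auto-generated node numbers
                cards.append(f"PIPE    {pipe_id:<8}{node:>6}{nxt:>6}{length:>10.3f}")
                node = nxt
            return cards

        print("\n".join(make_cards(segments)))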

    2. STEP Program Benchmark Report

      Broader source: Energy.gov [DOE]

      STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).

    3. Residential Buildings Integration Program

      Broader source: Energy.gov [DOE]

      Residential Buildings Integration Program Presentation for the 2013 Building Technologies Office's Program Peer Review

    4. Powering Research | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The form factor for the decay of a kaon into a pion and two leptons Lattice QCD Paul Mackenzie Allocation Program: INCITE Allocation Hours: 180 Million Breakthrough Science At the ALCF, we provide researchers from industry, academia, and government agencies with access to leadership-class supercomputing capabilities and a team of expert computational scientists.

    5. Compiling & Linking | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System Overview Data Storage & File Systems Compiling & Linking Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource.

    6. computing | National Nuclear Security Administration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computing NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile

    7. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

      SciTech Connect (OSTI)

      Michael Pernice

      2010-09-01

      INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

    8. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe. January 30, 2012. Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849. Magellan at NERSC. After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    9. Managing turbine-generator outages by computer

      SciTech Connect (OSTI)

      Reinhart, E.R. [Reinhart and Associates, Inc., Austin, TX (United States)

      1997-09-01

      This article describes software being developed to address the need for computerized planning and documentation programs that can help manage outages. Downsized power-utility companies and the growing demand for independent, competitive engineering and maintenance services have created a need for a computer-assisted planning and technical-direction program for turbine-generator outages. To meet this need, a software tool is now under development that can run on a desktop or laptop personal computer to assist utility personnel and technical directors in outage planning. Total Outage Planning Software (TOPS), which runs on Windows, takes advantage of the mass data storage available with compact-disc technology by archiving the complete outage documentation on CD. Previous outage records can then be indexed, searched, and viewed on a computer with the click of a mouse. Critical-path schedules, parts lists, parts order tracking, work instructions and procedures, custom data sheets, and progress reports can be generated by computer on-site during an outage.

    10. DHC: a diurnal heat capacity program for microcomputers

      SciTech Connect (OSTI)

      Balcomb, J.D.

      1985-01-01

      A computer program has been developed that can predict the temperature swing in direct gain passive solar buildings. The diurnal heat capacity (DHC) program calculates the DHC for any combination of homogeneous or layered surfaces using closed-form harmonic solutions to the heat diffusion equation. The theory is described, a Basic program listing is provided, and an example solution printout is given.
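
      A minimal sketch of the kind of closed-form harmonic solution the abstract describes, for a single homogeneous layer with an adiabatic back surface; the material properties, the 24-hour period, and the normalization of the result are illustrative assumptions, not the published Basic listing.

      import cmath
      import math

      def dhc_single_layer(k, rho, c, thickness, period=24 * 3600.0):
          """Approximate diurnal heat capacity (J/m^2/K) of one homogeneous layer.

          k: conductivity [W/m/K]; rho: density [kg/m^3]; c: specific heat [J/kg/K];
          thickness: layer thickness [m]; period: driving period [s] (24 h diurnal cycle).
          """
          omega = 2.0 * math.pi / period            # angular frequency of the temperature swing
          alpha = k / (rho * c)                     # thermal diffusivity [m^2/s]
          gamma = cmath.sqrt(1j * omega / alpha)    # complex wavenumber of the thermal wave
          admittance = k * gamma * cmath.tanh(gamma * thickness)  # adiabatic-backed slab
          # Heat stored per unit surface area per degree of swing; the exact constant
          # in Balcomb's definition may differ from this simple normalization.
          return abs(admittance) / omega

      if __name__ == "__main__":
          # 10 cm concrete-like slab (assumed properties)
          print(round(dhc_single_layer(k=1.7, rho=2200.0, c=880.0, thickness=0.10)))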

    11. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    12. Intergovernmental Programs | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Intergovernmental Programs The Office of Environmental Management supports, by means of grants and cooperative agreements, a number of

    13. Program Update

      Energy Savers [EERE]

      The January-March 2015 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. Please direct all comments and inquiries to lm@hq.doe.gov. Visit us at http://energy.gov/lm/ Goal 4 Successful Transition from Mound Site to Mound Business Park Continues The Mound Business Park attracts a variety of businesses to the former U.S. Department of Energy (DOE) Mound, Ohio, Site in Miamisburg. In

    14. Programming Models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Version Control Tools Programming Libraries Performance and Debugging Tools Grid Software and Services NERSC Software Downloads Policies User Surveys NERSC Users Group User Announcements Help Staff Blogs Request Repository Mailing List

    15. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first of kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    16. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design» supercomputing

    17. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a ...

    18. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation OpenEI Reference Library Web Site: Computer simulation Author wikipedia Published wikipedia, 2013 DOI Not Provided...

    19. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse ...

    20. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    1. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Sandia donates 242 computers to northern California schools Sandia National Laboratories electronics technologist Mitch Williams prepares the disassembly of 242 computers for ...

    2. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
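
      A toy sketch of the boundary behavior described above: feedback force grows as the locus of interaction nears a boundary and drops abruptly once the boundary is traversed. The stiffness, clamp, and distance convention are illustrative assumptions, not values from the patent.

      def boundary_force(distance, stiffness=0.05, max_force=5.0):
          """Feedback force [N] as a function of signed distance [m] to a boundary.

          distance > 0: the user is approaching the boundary from the current region;
          distance <= 0: the boundary has been traversed, so resistance is released.
          """
          if distance <= 0.0:
              return 0.0                              # perceptible change: force vanishes after crossing
          force = stiffness / (distance + 1e-3)       # stiffens as the boundary is approached
          return min(force, max_force)                # clamp to the device's force limit

      if __name__ == "__main__":
          for d in (0.10, 0.01, 0.001, -0.01):
              print(d, round(boundary_force(d), 2))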

    3. Geothermal Technologies Program Overview - Peer Review Program

      SciTech Connect (OSTI)

      Milliken, JoAnn

      2011-06-06

      This Geothermal Technologies Program presentation was delivered on June 6, 2011 at a Program Peer Review meeting. It contains annual budget, Recovery Act, funding opportunities, upcoming program activities, and more.

    4. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    5. Undergraduate Student Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Undergraduate Program Undergraduate Student Program The Undergraduate Student (UGS) program is a year-round educational program that provides students with relevant research experience while they are pursuing an undergraduate degree. Contact Program Manager Scott Robbins Student Programs (505) 667-3639 Email Program Coordinator Emily Robinson Student Programs (505) 665-0964 Email Deadline for continuing and returning students: you are required to submit updated transcripts to the program office

    6. Machinist Pipeline/Apprentice Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost effective than previous time-based programs Moves apprentices to journeyworker status more quickly Program Coordinator: Heidi Hahn Email: hahn@lanl.gov Phone number:...

    7. Existing Facilities Rebate Program

      Broader source: Energy.gov [DOE]

      The NYSERDA Existing Facilities program merges the former Peak Load Reduction and Enhanced Commercial and Industrial Performance programs. The new program offers a broad array of different...

    8. Graduate Research Assistant Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      in intellectual vitality and opportunities for growth. Contact Program Manager Scott Robbins Student Programs (505) 667-3639 Email Program Coordinator Emily Robinson Student...

    9. Commercial Buildings Integration Program

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Buildings Integration Program Arah Schuur Program Manager arah.schuur@ee.doe.gov April 2, ... Commercial Buildings Integration Program Mission Accelerate voluntary uptake of ...

    10. DOE's Tribal Energy Program

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ... Tribal Energy Program Program Funding History Tribal Energy Program Funding* FY2009 ... FY2011 Financial Assistance First Steps Toward Developing Energy Efficiency and ...

    11. Building Technologies Program Presentation

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Renewable Energy Building Technologies Program Jerry Dion Acting Program Manager Building Technologies Program State Energy Advisory Board Meeting October 17, 2007 The investment ...

    12. Information hiding in parallel programs

      SciTech Connect (OSTI)

      Foster, I.

      1992-01-30

      A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

    13. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    14. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
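
      A simulated sketch of why the scheme in the two records above works: if a pulse broadcast from the root at a single instant reaches node i after latency[i], and each node sets its time base to that latency, then (receive time - time base) is identical everywhere. The latency values below are illustrative assumptions.

      import random

      def simulate_sync(latencies, pulse_time=1000):
          """latencies: dict of node id -> root-to-node transmission latency (cycles)."""
          aligned = {}
          for node, latency in latencies.items():
              receive_time = pulse_time + latency   # when the broadcast pulse reaches this node
              time_base = latency                   # per the claim: base = root-to-node latency
              aligned[node] = receive_time - time_base
          return aligned                            # every entry equals pulse_time: a common time origin

      if __name__ == "__main__":
          lat = {n: random.randint(5, 40) for n in range(1, 6)}
          lat[0] = 0                                # the designated root
          print(simulate_sync(lat))                 # all values identical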

    15. Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7-28, 2012 Barbara Helland, Advanced Scientific Computing Research, NERSC-HEP Requirements Review. Science case studies drive discussions. Program Requirements Reviews: program offices are evaluated every two to three years; participants include program managers, PIs/scientists, and ESnet/NERSC staff and management; user-driven discussion of science opportunities and needs. What: instruments and facilities, data scale, computational requirements. How: science process, data analysis,

    16. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    17. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user in the process of answering the LAVA/CS questionnaire identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, storms, fires, power abnormalities, water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    18. Vehicle Technologies Office Merit Review 2014: Integrated Computational Materials Engineering Approach to Development of Lightweight 3GAHSS Vehicle Assembly

      Broader source: Energy.gov [DOE]

      Presentation given by USAMP at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about integrated computational materials...

    19. Vehicle Technologies Office Merit Review 2015: Integrated Computational Materials Engineering Approach to Development of Lightweight 3GAHSS Vehicle Assembly

      Broader source: Energy.gov [DOE]

      Presentation given by USAMP at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about integrated computational materials...

    20. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

      DOE Patents [OSTI]

      Gschwind, Michael K

      2013-04-16

      Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

    1. HeNCE: A Heterogeneous Network Computing Environment

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; Manchek, Robert; Moore, Keith

      1994-01-01

      Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
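
      A toy sketch (not HeNCE itself) of the graph model described above: nodes stand for ordinary subroutines, arcs carry data, and a node fires once all of its upstream results exist. The node functions and graph are illustrative assumptions, and Python stands in for the Fortran or C routines.

      def load(_):
          return [3, 1, 4, 1, 5]

      def square(xs):
          return [x * x for x in xs]

      def total(xs):
          return sum(xs)

      GRAPH = {                         # node name -> (subroutine, upstream node names)
          "load":   (load,   []),
          "square": (square, ["load"]),
          "total":  (total,  ["square"]),
      }

      def run(graph):
          results, pending = {}, dict(graph)
          while pending:                # fire any node whose inputs are all available
              ready = [n for n, (_, deps) in pending.items()
                       if all(d in results for d in deps)]
              if not ready:
                  raise ValueError("cycle or missing dependency in graph")
              for name in ready:
                  func, deps = pending.pop(name)
                  args = [results[d] for d in deps] or [None]
                  results[name] = func(*args)
          return results

      if __name__ == "__main__":
          print(run(GRAPH)["total"])    # 52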

    2. Better Buildings Neighborhood Program Business Models Guide: Program Administrator Description

      Broader source: Energy.gov [DOE]

      Better Buildings Neighborhood Program Business Models Guide: Program Administrator Business Models, Program Administrator Description.

    3. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced software research & development Collaborative technologies Computational science and mathematics High-performance computing Visualization and scientific computing Advanced ...

    4. Multicore Challenges and Benefits for High Performance Scientific Computing

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Nielsen, Ida M.B.; Janssen, Curtis L.

      2008-01-01

      Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
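
      A minimal stand-in for the hybrid decomposition mentioned above (not the authors' code): the outer loop over row blocks mimics distributed-memory ranks that would exchange data by message passing, while a thread pool works within each block. The matrix sizes, rank count, and thread count are illustrative assumptions.

      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def hybrid_matmul(A, B, ranks=2, threads_per_rank=4):
          """Compute C = A @ B with a two-level (process-block / thread) decomposition."""
          n = A.shape[0]
          C = np.zeros((n, B.shape[1]))
          row_blocks = np.array_split(np.arange(n), ranks)      # one block per "MPI rank"
          for block in row_blocks:                              # would run on separate nodes
              def work(r):                                      # one thread handles one row
                  C[r, :] = A[r, :] @ B
              with ThreadPoolExecutor(max_workers=threads_per_rank) as pool:
                  list(pool.map(work, block))
          return C

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          A, B = rng.random((64, 48)), rng.random((48, 32))
          assert np.allclose(hybrid_matmul(A, B), A @ B)
          print("hybrid result matches numpy")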

    5. Executing a gather operation on a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Ratterman, Joseph D.

      2012-03-20

      Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer of the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node, if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data, if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data, and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
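
      An illustrative pure-Python simulation (not the patented hardware path) of the scheme described above: for each result-buffer position, only the node whose rank matches contributes its data, all other ranks contribute zero, and OR-ing every contribution reproduces that node's value at the root. The data values are illustrative assumptions.

      from functools import reduce

      def gather_via_or(contributions):
          """contributions: list indexed by node rank; each value is that node's data word."""
          n = len(contributions)
          result_buffer = []
          for position in range(n):                          # one reduction per buffer position
              per_node = [contributions[rank] if rank == position else 0
                          for rank in range(n)]              # non-matching ranks contribute zero
              result_buffer.append(reduce(lambda a, b: a | b, per_node))  # combining-network OR
          return result_buffer

      if __name__ == "__main__":
          data = [0x11, 0x22, 0x33, 0x44]
          assert gather_via_or(data) == data
          print([hex(v) for v in gather_via_or(data)])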

    6. Berkeley Lab Joins DOE's New HPC4Manufacturing Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Joins DOE's New HPC4Manufacturing Program Berkeley Lab Joins DOE's New HPC4Manufacturing Program September 15, 2015 Lawrence Berkeley National Laboratory (Berkeley Lab) is collaborating with Lawrence Livermore and Oak Ridge national laboratories on a new Department of Energy (DOE) program designed to fund and foster public-private R&D projects that enhance U.S. competitiveness in clean energy manufacturing. The High Performance Computing for Manufacturing Program (HPC4Mfg), announced this

    7. 2014 Call for NERSC Initiative for Scientific Exploration (NISE) Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Due December 8 the NERSC Initiative for Scientific Exploration (NISE) program 2014 Call for NERSC Initiative for Scientific Exploration (NISE) Program Due December 8 November 18, 2013 by Francesca Verdier Users may now submit requests for the 2014 NERSC Initiative for Scientific Exploration (NISE) program. The deadline to apply is Sunday December 8, 11:59 PM Pacific Time. The goals for this program in 2014 are: HPC and data analysis: Projects that leverage extreme scale parallel computing to

    8. Method and apparatus for collaborative use of application program

      DOE Patents [OSTI]

      Dean, Craig D.

      1994-01-01

      Method and apparatus permitting the collaborative use of a computer application program simultaneously by multiple users at different stations. The method is useful with communication protocols having client/server control structures. The method of the invention requires only a sole executing copy of the application program and a sole executing copy of software comprising the invention. Users may collaboratively use a set of application programs by invoking for each desired application program one copy of software comprising the invention.

    9. Building Life Cycle Cost Programs | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Building Life Cycle Cost Programs Building Life Cycle Cost Programs The National Institute of Standards and Technology (NIST) developed the Building Life Cycle Cost (BLCC) Programs to provide computational support for the analysis of capital investments in buildings. They include BLCC5, the Energy Escalation Rate Calculator, Handbook 135, and the Annual Supplement to Handbook 135. BLCC5 Program Register and download. BLCC 5.3-15 (for Windows or Mac OS X). BLCC version 5.3-15 contains the

    10. Human Reliability Program Overview

      SciTech Connect (OSTI)

      Bodin, Michael

      2012-09-25

      This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

    11. Vehicle Technologies Program Overview

      SciTech Connect (OSTI)

      none,

      2006-09-05

      Overview of the Vehicle Technologies Program including external assessment and market view; internal assessment, program history and progress; program justification and federal role; program vision, mission, approach, strategic goals, outputs, and outcomes; and performance goals.

    12. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs ...

    13. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Magellan: A Cloud Computing Testbed Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office

    14. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software and High Performance Computing Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844 Software Computational physics, computer science, applied mathematics, statistics and the

    15. NNSS Groundwater Program Welcomes Peer Review Team

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      April 18, 2014 NNSS Groundwater Program Welcomes Peer Review Team Recently, an independent peer review team was invited to assess the groundwater characterization program at the Nevada National Security Site (NNSS). This nationally recognized group of experts, from various external organizations, will examine the computer modeling approach developed to better understand how historic underground nuclear testing in Yucca Flat affected the groundwater. From April 7th to 11th, 2014, five peer

    16. Certificate in Environmental Monitoring Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Certificate in Environmental Monitoring Program Description Since a primary goal of the Neighborhood Environmental Watch Network (NEWNET) project is to provide information to the public, it is fitting that there are appropriate education programs. NEWNET has collaborated with several local high schools and colleges by providing them with local NEWNET stations. Some teaching curricula include a study of radiation and detection, data acquisition and plotting, meteorology, or uses of computers.

    17. Program Requirements | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Program Requirements Participants Academies: USAFA, USNA, USMA, USCGA, USMMA cadets/midshipmen NNSA Sites: LANL, LLNL, SNL, NNSS, Pantex, KC Plant, Y-12 Plant, Savannah River NNSA Headquarters: Defense Programs management Eligibility Requirements Student in good standing Secret security clearance with some authorized to CNWDI (RD) desired Major in physics, chemistry, engineering, material science, life science, computer science, social science (political science, psychology and public affairs)

    18. Representation of Limited Rights Data and Restricted Computer Software |

      Energy Savers [EERE]

      Department of Energy Representation of Limited Rights Data and Restricted Computer Software Representation of Limited Rights Data and Restricted Computer Software Any data delivered under an award resulting from this announcement is subject to the Rights in Data - General or the Rights in Data - Programs Covered Under Special Data Statutes clause (See Intellectual Property Provisions). Under these clauses, the Recipient may withhold from delivery data that qualify as limited rights data or

    19. Computational Analysis of the Thermal-Hydraulic Characteristics of the

      Office of Scientific and Technical Information (OSTI)

      Encapsulated Nuclear Heat Source (Journal Article) | SciTech Connect. Title: Computational Analysis of the Thermal-Hydraulic Characteristics of the Encapsulated Nuclear Heat Source. The encapsulated nuclear heat source (ENHS) is a modular reactor that was selected by the 1999 U.S. Department of Energy Nuclear Energy Research Initiative program as a

    20. Representation of Limited Rights Data and Restricted Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      REPRESENTATION OF LIMITED RIGHTS DATA AND RESTRICTED COMPUTER SOFTWARE Applicant: Funding Opportunity Announcement/Solicitation No.: (a) Any data delivered under an award resulting from this announcement is subject to the Rights in Data - General or the Rights in Data - Programs Covered Under Special Data Statutes clause (See Intellectual Property Provisions). Under these clauses, the Recipient may withhold from delivery data that qualify as limited rights data or restricted computer software.

    1. What Are the Computational Keys to Future Scientific Discoveries?

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      What are the Computational Keys to Future Scientific Discoveries? NERSC Develops a Data Intensive Pilot Program to Help Scientists Find Out August 23, 2012 Linda Vu, lvu@lbl.gov, +1 510 495 2402 Advanced Light Source at the Lawrence Berkeley National Laboratory. (Photo by: Roy Kaltschmidt, Berkeley Lab) A new camera at the hard x-ray tomography beamline of Lawrence Berkeley National Laboratory's (Berkeley Lab's) Advanced

    2. Enabling Green Energy and Propulsion Systems via Direct Noise Computation |

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility High-fidelity simulation of exhaust nozzle under installed configuration Umesh Paliath, GE Global Research; Joe Insley, Argonne National Laboratory Enabling Green Energy and Propulsion Systems via Direct Noise Computation PI Name: Umesh Paliath PI Email: paliath@ge.com Institution: GE Global Research Allocation Program: INCITE Allocation Hours at ALCF: 105 Million Year: 2013 Research Domain: Engineering GE Global Research is using the Argonne Leadership

    3. ASCR Leadership Computing Challenge (ALCC) proposals due February 1, 2013

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (ALCC) proposals due February 1, 2013 ASCR Leadership Computing Challenge (ALCC) proposals due February 1, 2013 January 2, 2013 by Francesca Verdier DOE's ASCR Leadership Computing Challenge (ALCC) program is intended for special situations of interest to the Department's energy mission, with an emphasis on high-risk, high-payoff simulations: Advancing the clean energy agenda. Advancing a robust predictive understanding of the Earth's climate and environmental systems. Responding to natural and

    4. Wisconsin Clean Transportation Program

      Broader source: Energy.gov [DOE]

      2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation

    5. Wisconsin Clean Transportation Program

      Broader source: Energy.gov [DOE]

      2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting

    6. Residential Buildings Integration Program

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      April 2, 2013 Residential Buildings Integration Program Building Technologies Office ... Overview of the Residential Integration Program Research Implementation tools ...

    7. HQ Mediation Program Brochure

      Broader source: Energy.gov [DOE]

      This document is the HQ Mediation Program's brochure.  It generally discusses the services the program offers.

    8. Utility Partnerships Program Overview

      SciTech Connect (OSTI)

      2014-10-03

      Document describes the Utility Partnerships Program within the U.S. Department of Energy's Federal Energy Management Program.

    9. STEM Education Program Inventory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Web form for submitting entries to the STEM Education Program Inventory. Fields include title of program, requestor contact information, institution name, program description, leading organization, location of program/event, program address, program website, and type of program (Workforce Development, Student Programs, Public Engagement in Life Long Learning).

    10. Mentoring Program | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mentoring Program Mentoring Program Mentee Questionnaire Mentor Questionnaire Ideas for Mentoring Program Activities...

    11. Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer August 6, 2015 Jon Bashor, jbashor@lbl.gov, +1 510 486 5849 To help prepare students from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. Thirty-eight students from 10 East Bay high schools registered for the eight-week

    12. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

      SciTech Connect (OSTI)

      Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

      2006-12-18

      The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

    13. Machinist Pipeline/Apprentice Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Machinist Pipeline/Apprentice Program Program Description The Machinist Pipeline Program was created by the Prototype Fabrication Division to fill a critical need for skilled journeyworker machinists. It is based on a program developed by the National Institute for Metalworking Skills (NIMS) in conjunction with metalworking trade associations to develop and maintain a globally competitive U.S. workforce. The goal is to develop and implement apprenticeship programs that are aligned with

    14. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    15. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials. J. Kim, A. Koniges, R. Martin, and M. Haranczyk (Lawrence Berkeley National Laboratory, Berkeley, CA 94720); J. Swisher (Department of Chemical Engineering, University of California, Berkeley, Berkeley, CA 94720); and B. Smit (both affiliations). E-mail: jihankim@lbl.gov. Abstract. In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on

    16. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

      DOE Patents [OSTI]

      Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E. [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I. [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D. [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY

      2008-06-03

      Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieve commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
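
      A hedged sketch of the comparison step described above: per-node values recorded for the same reproducible program section are compared across two runs, and nodes whose values differ are flagged. The simple additive checksum is an illustrative assumption standing in for the commutative error detection value.

      def additive_checksum(packets):
          """Commutative stand-in checksum: the order of injected packets does not matter."""
          return sum(packets) & 0xFFFFFFFF

      def suspect_nodes(run_a, run_b):
          """run_a, run_b: dicts of node id -> error detection value for the same program section."""
          return sorted(node for node in run_a
                        if node in run_b and run_a[node] != run_b[node])

      if __name__ == "__main__":
          run1 = {0: additive_checksum([1, 2, 3]), 1: additive_checksum([7, 7])}
          run2 = {0: additive_checksum([3, 2, 1]), 1: additive_checksum([7, 8])}  # node 1 differs
          print(suspect_nodes(run1, run2))   # -> [1]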

    17. Failure detection in high-performance clusters and computers using chaotic map computations

      DOE Patents [OSTI]

      Rao, Nageswara S.

      2015-09-01

      A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
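
      An illustrative sketch of the underlying idea: a healthy component reproduces a chaotic trajectory exactly, while even a tiny arithmetic fault is amplified by the map and becomes detectable within a few iterations. The logistic map, its parameters, and the fault model are illustrative assumptions, not the patented mechanism.

      def logistic_trajectory(x0, steps=50, r=3.99, fault_at=None, fault_eps=1e-12):
          """Iterate the logistic map; optionally inject a tiny arithmetic fault at one step."""
          x, out = x0, []
          for i in range(steps):
              x = r * x * (1.0 - x)
              if fault_at is not None and i == fault_at:
                  x += fault_eps                  # model a single low-order arithmetic error
              out.append(x)
          return out

      def trajectories_match(a, b, tol=1e-9):
          return all(abs(p - q) <= tol for p, q in zip(a, b))

      if __name__ == "__main__":
          reference = logistic_trajectory(0.123456789)
          healthy = logistic_trajectory(0.123456789)
          faulty = logistic_trajectory(0.123456789, fault_at=10)
          print(trajectories_match(reference, healthy))   # True: fault-free node agrees exactly
          print(trajectories_match(reference, faulty))    # False: chaos amplifies the injected fault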

    18. Computer System, Cluster, and Networking Summer Institute Projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CSCNSI Projects Computer System, Cluster, and Networking Summer Institute Projects Present and past projects Contacts Program Lead Carolyn Connor (505) 665-9891 Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2015 Projects The summer school program was held June 1-July 31, 2015, at the National Security Education Center (NSEC) and New Mexico Consortium (NMC). Class of 2015 Back Row L-R: Matthew Broomfield (instructor), Gustavo Rayos, Destiny

    19. Multithreaded processor architecture for parallel symbolic computation. Technical report

      SciTech Connect (OSTI)

      Fujita, T.

      1987-09-01

      This paper describes the Multilisp Architecture for Symbolic Applications (MASA), which is a multithreaded processor architecture for parallel symbolic computation with various features intended for effective Multilisp program execution. The principal mechanisms exploited for this processor are multiple contexts, interleaved pipeline execution from separate instruction streams, and synchronization based on a bit in each memory cell. The tagged architecture approach is taken for Lisp program execution, and trap conditions are provided for future object manipulation and garbage collection.

    20. Methods for operating parallel computing systems employing sequenced communications

      DOE Patents [OSTI]

      Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

      1999-01-01

      A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

    1. Methods for operating parallel computing systems employing sequenced communications

      DOE Patents [OSTI]

      Benner, R.E.; Gustafson, J.L.; Montry, G.R.

      1999-08-10

      A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

    2. Menu Driven Program Determining Properties of Aqueous Lithium Bromide Solutions

      Energy Science and Technology Software Center (OSTI)

      1992-12-09

      LIMENU is a menu driven program written to compute seven physical properties of a lithium bromide-water solution and three physical properties of water, and to display two plots.

    3. ACToR - Aggregated Computational Toxicology Resource

      SciTech Connect (OSTI)

      Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

      2008-11-15

      ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast{sup TM}.

    4. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: our mission; computer systems, past, present, and future; challenges along the way; resources for users. Our mission: the world's most powerful computing facility; the nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; the nation's most diverse energy

    5. High performance computing and communications: FY 1995 implementation plan

      SciTech Connect (OSTI)

      1994-04-01

The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

    6. High School Internship Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Point your career towards Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Contact: Program Manager Scott Robbins, Student Programs, (505) 667-3639; Program Coordinator Brenda Montoya, Student Programs, (505) 667-4866. The High School Internship Program provides opportunities for qualified Northern New Mexico high school seniors.

    7. Optimization Using Metamodeling in the Context of Integrated Computational Materials Engineering (ICME)

      SciTech Connect (OSTI)

      Hammi, Youssef; Horstemeyer, Mark F; Wang, Paul; David, Francis; Carino, Ricolindo

      2013-11-18

Predictive Design Technologies, LLC (PDT) proposed to employ Integrated Computational Materials Engineering (ICME) tools to help the manufacturing industry in the United States regain the competitive advantage in the global economy. ICME uses computational materials science tools within a holistic system in order to accelerate materials development, improve design optimization, and unify design and manufacturing. With the advent of accurate modeling and simulation along with significant increases in high performance computing (HPC) power, virtual design and manufacturing using ICME tools provide the means to reduce product development time and cost by alleviating costly trial-and-error physical design iterations while improving overall quality and manufacturing efficiency. To reduce the computational cost necessary for the large-scale HPC simulations and to make the methodology accessible for small and medium-sized manufacturers (SMMs), metamodels are employed. Metamodels are approximate models (functional relationships between input and output variables) that can reduce simulation times by one to two orders of magnitude. In Phase I, PDT, partnered with Mississippi State University (MSU), demonstrated the feasibility of the proposed methodology by employing MSU's internal state variable (ISV) plasticity-damage model, with the help of metamodels, to optimize the microstructure-process-property-cost relationships for the tube manufacturing processes used by Plymouth Tube Company (PTC), which involve complicated temperature and mechanical loading histories. PDT quantified the microstructure-property relationships for PTC's SAE J525 electric resistance-welded cold drawn low carbon hydraulic 1010 steel tube manufacturing processes at seven different material states and calibrated the ISV plasticity material parameters to fit experimental tensile stress-strain curves. PDT successfully performed large-scale finite element (FE) simulations in an HPC environment using the ISV plasticity model in Abaqus FE analyses of the tube forming, sizing, drawing, welding, and normalizing processes. The simulation results, coupled with the manufacturing cost data, were used to develop prototype metamodeling (quick-response) codes that can be used to predict and optimize the microstructure-process-property-cost relationships. The developed ICME metamodeling toolkits are flexible enough to be applied to other manufacturing processes (e.g., forging, forming, casting, extrusion, rolling, stamping, and welding/joining), and the metamodeling codes can run on laptop computers. Based on the work completed in Phase I, PDT proposes in Phase II to continue to refine the ISV model by correlating and incorporating the uncertainties in the microstructure, mechanical testing, and modeling. Following the model refinement, FE analyses will be performed that provide even more realistic predictions, as they include an appropriate window of uncertainty. Using the HPC output (FE analyses) as input, the quick-response metamodel codes will more accurately predict and optimize the microstructure-process-property-cost relationships. Furthermore, PDT proposes to employ the ICME metamodeling toolkits to help develop a new tube product using an entirely new high strength steel. The modeling of the high strength steel manufacturing process will replace the costly and time-consuming trial-and-error methods that were used in the tubing industry previously.
This simulation-based process prototyping will greatly benefit our industrial partners by opening up new market spaces due to new products with greater capabilities.
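
      A minimal sketch of the metamodeling idea, assuming a hypothetical one-variable objective in place of an expensive Abaqus FE run, is shown below: a few "simulation" samples are fit with a cheap quadratic surrogate, which is then evaluated densely to locate a near-optimal input. PDT's actual ISV-based models and cost data are not represented.

```python
# Minimal metamodeling sketch: fit a cheap surrogate (here a quadratic
# polynomial) to a few expensive "simulation" evaluations, then optimize the
# surrogate instead of the simulator. The objective is a made-up stand-in.
import numpy as np

def expensive_simulation(x):            # placeholder for an HPC FE analysis
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

x_train = np.linspace(0.0, 1.0, 7)      # a small design of experiments
y_train = expensive_simulation(x_train)

coeffs = np.polyfit(x_train, y_train, deg=2)   # the metamodel
surrogate = np.poly1d(coeffs)

x_dense = np.linspace(0.0, 1.0, 1001)          # cheap to evaluate densely
x_best = x_dense[np.argmin(surrogate(x_dense))]
print(f"surrogate optimum near x = {x_best:.3f}, "
      f"true objective there = {expensive_simulation(x_best):.4f}")
```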

    8. U.S. Forest Service's Power-IT-Down Program

      SciTech Connect (OSTI)

      2016-01-01

This case study describes the U.S. Forest Service's Power-IT-Down Program, which strongly encouraged employees to shut off their computers when leaving the office. The U.S. Forest Service first piloted the program on a voluntary basis in one region, then implemented it across the agency's 43,000 computers as a joint effort by the Chief Information Office and the Sustainable Operations department.

    9. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    10. [Computer Science and Telecommunications Board activities

      SciTech Connect (OSTI)

      Blumenthal, M.S.

      1993-02-23

The board considers technical and policy issues pertaining to computer science, telecommunications, and associated technologies. Its functions include providing a base of expertise in these fields for the NRC, monitoring and promoting the health of these fields, initiating studies of these fields as critical resources and sources of national economic strength, responding to requests for advice, and fostering interaction between these technologies and other areas of pure and applied science and technology. This document describes the board's major accomplishments, current programs, other sponsored activities, cooperative ventures, and plans and prospects.

    11. Classified Automated Information System Security Program

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1994-07-15

      To establish uniform requirements, policies, responsibilities, and procedures for the development and implementation of a Department of Energy (DOE) Classified Computer Security Program to ensure the security of classified information in automated data processing (ADP) systems. Cancels DOE O 5637.1. Canceled by DOE O 471.2.

    12. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 21-22, 2012, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    13. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.
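
      For a rough sense of the Littrow readout geometry mentioned above, the snippet below evaluates the standard grating Littrow condition 2*d*sin(theta) = m*lambda. The 300 nm period and 405 nm wavelength are illustrative numbers, not values taken from the patent.

```python
# Back-of-the-envelope Littrow angle: incident and diffracted beams retrace
# each other, so 2 * d * sin(theta) = m * wavelength. Inputs are illustrative.
import math

def littrow_angle_deg(period_nm, wavelength_nm, order=1):
    s = order * wavelength_nm / (2.0 * period_nm)
    if abs(s) > 1.0:
        raise ValueError("no propagating Littrow order for these parameters")
    return math.degrees(math.asin(s))

print(f"{littrow_angle_deg(300.0, 405.0):.1f} degrees")  # about 42.5 degrees
```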

    14. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of it to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter and captures light passed through the target.
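
      The sketch below illustrates the "virtual aperture" idea in software, assuming a captured camera frame stored as a NumPy array: only pixels within a small radius of the illuminated spot contribute to the confocal signal. The frame size, spot location, and aperture radius are invented for the example and are not from the patent.

```python
# Software "virtual aperture": keep only pixels near the illuminated spot,
# emulating a confocal pinhole. All numbers below are made up.
import numpy as np

def virtual_aperture_signal(frame, spot_rc, radius_px):
    """Sum intensity inside a software-defined circular aperture."""
    rows, cols = np.indices(frame.shape)
    mask = (rows - spot_rc[0]) ** 2 + (cols - spot_rc[1]) ** 2 <= radius_px ** 2
    return frame[mask].sum()

rng = np.random.default_rng(0)
frame = rng.poisson(5.0, size=(64, 64)).astype(float)   # fake camera frame
frame[30:34, 40:44] += 200.0                            # bright focal spot
print(virtual_aperture_signal(frame, spot_rc=(32, 42), radius_px=3))
```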

    15. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    16. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

DOE Form F 241.4 (10-01), Announcement of Computer Software (OMB Control Number 1910-1400), which replaces ESTSC F1 and ESTSC F2; all other editions are obsolete. The form records the record status (new package or software revision) and, in Part I (STI Software Description), a description/abstract, the software title, short name or acronym, keywords-in-context (KWIC) title, developer names and e-mail addresses, and site product number.

    17. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

We've incorporated the tagline, "Creating Materials and Energy Solutions," into a computer wallpaper so you can display it on your desktop as a constant reminder...

    18. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Introduction to High Performance Computing, June 10, 2013. Download: Gerber-HPC-2.pdf...

    19. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world. Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in

    20. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014. LANL's Sanitary Effluent Reclamation Facility is key to reducing the Lab's discharge of liquid. Millions of gallons of industrial