National Library of Energy BETA

Sample records for abaqus computer program

  1. Visualizing MCNP Tally Segment Geometry and Coupling Results with ABAQUS

    SciTech Connect (OSTI)

    J. R. Parry; J. A. Galbraith

    2007-11-01

    The Advanced Graphite Creep test, AGC-1, is planned for irradiation in the Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant program. The experiment requires very detailed neutronics and thermal hydraulics analyses to show compliance with programmatic and ATR safety requirements. The MCNP model used for the neutronics analysis required hundreds of tally regions to provide the desired detail. A method for visualizing the hundreds of tally region geometries and the tally region results in 3 dimensions has been created to support the AGC-1 irradiation. Additionally, a method was created which would allow ABAQUS to access the results directly for the thermal analysis of the AGC-1 experiment.

  2. Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam

    SciTech Connect (OSTI)

    Siranosian, Antranik A.; Stevens, R. Robert

    2012-04-26

    This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data, from acceptance tests for 'Pad A' conducted at the Kansas City Plant (KCP), was used to calibrate the model. The data includes gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine one set of data, representing a qualitative mean, was chosen to calibrate the model. The data was converted from gap and load to nominal (engineering) strain and nominal stress in order to implement it in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
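
    A hedged illustration of the data-conversion step described above (not taken from the report): gap and load measurements are turned into the nominal strain/stress pairs that an Abaqus *HYPERFOAM calibration expects. The pad thickness, loaded area, sign convention, and sample values below are invented for the sketch.

    ```python
    # Hypothetical sketch: convert gap/load acceptance-test data into nominal
    # strain/stress pairs of the kind *HYPERFOAM test-data input uses.
    # Thickness, area, and the sample numbers are illustrative only.

    def gap_load_to_nominal(gaps_mm, loads_N, thickness_mm, area_mm2):
        """Return (nominal_strain, nominal_stress_MPa) lists.

        Nominal (engineering) strain = gap / initial thickness.
        Nominal stress = load / initial cross-sectional area.
        Compression is treated as negative here (a common sign convention).
        """
        strains = [-g / thickness_mm for g in gaps_mm]
        stresses = [-p / area_mm2 for p in loads_N]   # N/mm^2 == MPa
        return strains, stresses

    if __name__ == "__main__":
        gaps = [0.0, 0.05, 0.10, 0.15]      # mm, invented
        loads = [0.0, 12.0, 30.0, 55.0]     # N, invented
        strain, stress = gap_load_to_nominal(gaps, loads, thickness_mm=1.5, area_mm2=650.0)
        for e, s in zip(strain, stress):
            print(f"{e:+.4f}  {s:+.4f} MPa")
    ```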

  3. Enhancing the ABAQUS thermomechanics code to simulate multipellet steady and transient LWR fuel rod behavior

    SciTech Connect (OSTI)

    R. L. Williamson

    2011-08-01

    A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully-coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  4. Enhancing the ABAQUS Thermomechanics Code to Simulate Steady and Transient Fuel Rod Behavior

    SciTech Connect (OSTI)

    R. L. Williamson; D. A. Knoll

    2009-09-01

    A powerful multidimensional fuels performance capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. The various modeling capabilities are demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multi-pellet fuel rod, during both steady and transient operation. Computational results demonstrate the importance of a multidimensional fully-coupled thermomechanics treatment. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermo-mechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  5. ALCC Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ASCR Leadership Computing Challenge (ALCC) Program: The ALCC program allocates resources to projects...

  6. Director's Discretionary (DD) Program | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Director's Discretionary (DD) Program: ... working toward an INCITE or ALCC allocation to help them achieve computational readiness. ...

  7. Advanced Simulation and Computing Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Simulation and Computing (ASC) Program Unstable intermixing of heavy (sulfur hexafluoride) and light fluid (air). Show Caption Turbulence generated by unstable fluid flow. Show Caption Examining the effects of a one-megaton nuclear energy source detonated on the surface of an asteroid. Show Caption Los Alamos National Laboratory is home to two of the world's most powerful supercomputers, each capable of performing more than 1,000 trillion operations per second. The newer one, Cielo, was

  8. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center (OSTI)

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New and exciting updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.

  9. INCITE Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    INCITE Program Innovative and Novel Computational Impact on Theory and Experiment (INCITE) Program The INCITE program provides allocations to computationally intensive, large-scale research projects that aim to address "grand challenges" in science and engineering. The program conducts a two-part review of all proposals: a peer review by an international panel of experts and a computational-readiness review. The annual call for proposals is issued in April and the allocations are

  10. Calibrating the Abaqus Crushable Foam Material Model using UNM Data

    SciTech Connect (OSTI)

    Schembri, Philip E.; Lewis, Matthew W.

    2014-02-27

    Triaxial test data from the University of New Mexico and uniaxial test data from W-14 are used to calibrate the Abaqus crushable foam material model to represent the syntactic foam, composed of an APO-BMI matrix and carbon microballoons, used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and a non-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
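
    As a rough sketch of the calibration step mentioned above (fitting a line to the elastic region of a test response to estimate the modulus and the yield stress), the following shows one way such a fit could be done with NumPy. The stress-strain points, elastic-strain cutoff, and departure tolerance are invented and are not the UNM or W-14 data.

    ```python
    import numpy as np

    # Invented uniaxial response (strain, stress in MPa); not the report's test data.
    strain = np.array([0.000, 0.002, 0.004, 0.006, 0.010, 0.020, 0.040])
    stress = np.array([0.00, 0.46, 0.93, 1.38, 1.90, 2.05, 2.10])

    elastic = strain <= 0.006                                        # assumed linear-elastic window
    E, intercept = np.polyfit(strain[elastic], stress[elastic], 1)   # slope ~ elastic modulus

    # Take yield as the first point that departs from the fitted line by a set tolerance.
    tol = 0.05 * stress.max()
    departure = np.abs(stress - (E * strain + intercept))
    yield_idx = int(np.argmax(departure > tol))
    print(f"E ~ {E:.0f} MPa, estimated yield stress ~ {stress[yield_idx]:.2f} MPa")
    ```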

  11. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
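
    A minimal sketch of the grouping idea described in the abstract, assuming hypothetical per-thread lists of calling-instruction addresses; it illustrates the concept only and is not the patented implementation.

    ```python
    from collections import defaultdict

    def group_threads_by_call_sites(thread_addrs):
        """Map each distinct tuple of calling-instruction addresses to the threads
        sharing it; a small or singleton group often flags a defective thread."""
        groups = defaultdict(list)
        for tid, addrs in thread_addrs.items():
            groups[tuple(addrs)].append(tid)
        return groups

    # Hypothetical address captures from four threads of one program.
    threads = {
        0: [0x4005F0, 0x400A10],
        1: [0x4005F0, 0x400A10],
        2: [0x4005F0, 0x400A10],
        3: [0x4005F0, 0x400FFC],   # diverges from the others: worth inspecting
    }
    for addrs, tids in group_threads_by_call_sites(threads).items():
        print([hex(a) for a in addrs], "->", tids)
    ```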

  12. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  13. Programs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ALCF allocation programs include the Early Science Program, INCITE Program, ALCC Program, Director's Discretionary (DD) Program, and ALCF Data Science Program. Featured Science: Reactive MD Simulations of Electrochemical Oxide Interfaces at Mesoscale, Subramanian Sankaranarayanan; Allocation Program: INCITE; Allocation Hours: 40 Million. As a DOE Office of Science User Facility dedicated to open

  14. ADP computer security classification program

    SciTech Connect (OSTI)

    Augustson, S.J.

    1984-01-01

    CG-ADP-1, the Automatic Data Processing Security Classification Guide, provides for classification guidance (for security information) concerning the protection of Department of Energy (DOE) and DOE contractor Automatic Data Processing (ADP) systems which handle classified information. Within the DOE, ADP facilities that process classified information provide potentially lucrative targets for compromise. In conjunction with the security measures required by DOE regulations, necessary precautions must be taken to protect details of those ADP security measures which could aid in their own subversion. Accordingly, the basic principle underlying ADP security classification policy is to protect information which could be of significant assistance in gaining unauthorized access to classified information being processed at an ADP facility. Given this policy, classification topics and guidelines are approved for implementation. The basic program guide, CG-ADP-1 is broad in scope and based upon it, more detailed local guides are sometimes developed and approved for specific sites. Classification topics are provided for system features, system and security management, and passwords. Site-specific topics can be addressed in local guides if needed.

  15. 2014 call for NERSC's Data Intensive Computing Pilot Program...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2014 call for NERSC's Data Intensive Computing Pilot Program, due December 10. November 18, 2013, by Francesca Verdier. ...

  16. ORISE Resources: Equal Access Initiative Computer Grants Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Equal Access Initiative Computer Grants Program is sponsored by the National Minority AIDS Council (NMAC) and the National...

  17. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities ...

  18. Finite Volume Based Computer Program for Ground Source Heat Pump...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Finite Volume Based Computer Program for Ground Source Heat Pump Systems. Project objective: Create a new ...

  19. The Computational Physics Program of the national MFE Computer Center

    SciTech Connect (OSTI)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  20. Refurbishment program of HANARO control computer system

    SciTech Connect (OSTI)

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S.

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts could have caused great problems. The first consideration for a replacement of the control computer dates back to 2007. The supplier no longer produced the components of MLC, so the system could no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system in consideration of suitable interfaces, compliance with no special outage for installation and commissioning, and no change of the well-proven operation philosophy. HCCS is a DCS (Discrete Control System) using a PLC manufactured by RTP. To enhance reliability, we adopt a triple-processor system, a double I/O system, and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system, including the design requirements of HCCS. (authors)

  1. Computer System, Cluster, and Networking Summer Institute Program Description

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System, Cluster, and Networking Summer Institute Program Description The Computer System, Cluster, and Networking Summer Institute (CSCNSI) is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. The program emphasizes practical skill development in setting up, configuring, administering, testing, monitoring, and scheduling computer systems, supercomputer clusters, and computer

  2. The computational physics program of the National MFE Computer Center

    SciTech Connect (OSTI)

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.

  3. Computer System, Cluster, and Networking Summer Institute Program...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. ...

  4. ALCF Data Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The ALCF Data Science Program (ADSP) is targeted at "big data" science problems that require the scale and performance of leadership computing resources. ...

  5. Seventy Years of Computing in the Nuclear Weapons Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Seventy Years of Computing in the Nuclear Weapons Program. WHEN: Jan 13, 2015 7:30 PM - 8:00 PM. WHERE: Fuller Lodge Central ...

  6. Intro to computer programming, no computer required! | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

  7. An Information Dependant Computer Program for Engine Exhaust Heat Recovery

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    An Information Dependant Computer Program for Engine Exhaust Heat Recovery for Heating | Department of Energy. A computer program was developed to help engineers at rural Alaskan village power plants to quickly evaluate how to use exhaust waste heat from individual diesel power plants. deer09_avadhanula.pdf (95.11 KB). More Documents & Publications: Modular Low Cost High Energy Exhaust Heat

  8. Mira Early Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC architectures. Together, the 16 projects span a diverse range of scientific fields, numerical methods, programming models, and computational approaches. The latter include...

  9. An Information Dependant Computer Program for Engine Exhaust...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A computer program was developed to help engineers at rural Alaskan village power plants to quickly evaluate how to use exhaust waste heat from individual diesel power plants. ...

  10. Argonne Training Program on Extreme-Scale Computing Scheduled...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This program provides intensive hands-on training on the key skills, approaches and tools to design, implement, and execute computational science and engineering applications on ...

  11. Early Science Program | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In addition to fostering application readiness, the ESP allows researchers to pursue innovative computational science projects not possible on today's leadership-class ...

  12. Method and computer program product for maintenance and modernization backlogging

    DOE Patents [OSTI]

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
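
    A trivial worked example of the stated relation (future facility conditions = time-period-specific maintenance cost + modernization factor + backlog factor); the dollar figures are invented.

    ```python
    def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
        """Sum the three time-period-specific terms named in the abstract."""
        return maintenance_cost + modernization_factor + backlog_factor

    # Illustrative figures for one time period.
    print(future_facility_conditions(1.2e6, 3.5e5, 2.0e5))   # -> 1750000.0
    ```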

  13. A Computer Program For Speciation Calculation.

    Energy Science and Technology Software Center (OSTI)

    1990-11-21

    Version: 00 WHATIF-AQ is part of a family of programs for calculations of geochemistry in the near-field of radioactive waste with temperature gradients.

  14. Parallel Programming with MPI | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Balaji, MCS Rajeev Thakur, MCS Ken Raffenetti, MCS Halim Amer, MCS Event Website: https:www.mcs.anl.gov%7Eraffenetpermalinksargonne16mpi.php The Mathematics and Computer ...

  15. Seventy Years of Computing in the Nuclear Weapons Program

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Seventy Years of Computing in the Nuclear Weapons Program. WHEN: Jan 13, 2015 7:30 PM - 8:00 PM. WHERE: Fuller Lodge Central Avenue, Los Alamos, NM, USA. SPEAKER: Bill Archer of the Weapons Physics (ADX) Directorate. CONTACT: Bill Archer 505 665 7235. CATEGORY: Science. Event Description: Rich history of computing in the Laboratory's weapons program. The talk is free and open to the public and is part of the 2014-15 Los

  16. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED). Presented by Lisa Anderson (BNI). US DOE NPH Workshop...

  17. UFO (UnFold Operator) computer program abstract

    SciTech Connect (OSTI)

    Kissel, L.; Biggs, F.

    1982-11-01

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  18. Computer programs for multilocus haplotyping of general pedigrees

    SciTech Connect (OSTI)

    Weeks, D.E.; O'Connell, J.R.; Sobel, E.

    1995-06-01

    We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.

  19. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  20. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  1. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  2. Computer programs for eddy-current defect studies

    SciTech Connect (OSTI)

    Pate, J. R.; Dodd, C. V.

    1990-06-01

    Several computer programs to aid in the design of eddy-current tests and probes have been written. The programs, written in Fortran, deal in various ways with the response to defects exhibited by four types of probes: the pancake probe, the reflection probe, the circumferential boreside probe, and the circumferential encircling probe. Programs are included which calculate the impedance or voltage change in a coil due to a defect, which calculate and plot the defect sensitivity factor of a coil, and which invert calculated or experimental readings to obtain the size of a defect. The theory upon which the programs are based is the Burrows point defect theory, and thus the calculations of the programs will be more accurate for small defects. 6 refs., 21 figs.

  3. Application and implementation of transient algorithms in computer programs

    SciTech Connect (OSTI)

    Benson, D.J.

    1985-07-01

    This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.

  4. Multiple-comparison computer program using the Bonferroni t statistic

    SciTech Connect (OSTI)

    Johnson, E. E.

    1980-11-13

    To ascertain the agreement among laboratories, samples from a single batch of material are analyzed by the different laboratories and results are then compared. A graphical format was designed for presenting the results and for showing which laboratories have significantly different results. The appropriate statistic for simultaneously testing the significance of the differences between several means is Bonferroni t. A computer program was written to make the tests between means based on Bonferroni t and also to make multiple comparisons of the standard deviations associated with the means. The program plots the results and indicates means and standard deviations which are significantly different.
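
    A hedged sketch of the comparison scheme described above, rendered here as pairwise t-tests at a Bonferroni-adjusted significance level using SciPy; the laboratory results are invented, and the original program's exact construction of the Bonferroni t tests may differ.

    ```python
    from itertools import combinations
    from scipy import stats

    # Invented replicate results from three laboratories analyzing the same batch.
    labs = {
        "Lab A": [10.1, 10.3, 9.9, 10.2],
        "Lab B": [10.8, 11.0, 10.7, 10.9],
        "Lab C": [10.2, 10.0, 10.4, 10.1],
    }

    alpha = 0.05
    pairs = list(combinations(labs, 2))
    alpha_per_test = alpha / len(pairs)      # Bonferroni correction for 3 comparisons

    for a, b in pairs:
        t, p = stats.ttest_ind(labs[a], labs[b])
        verdict = "significantly different" if p < alpha_per_test else "not distinguishable"
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} -> {verdict}")
    ```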

  5. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  6. PET computer programs for use with the 88-inch cyclotron

    SciTech Connect (OSTI)

    Gough, R.A.; Chlosta, L.

    1981-06-01

    This report describes in detail several offline programs written for the PET computer which provide an efficient data management system to assist with the operation of the 88-Inch Cyclotron. This function includes the capability to predict settings for all cyclotron and beam line parameters for all beams within the present operating domain of the facility. The establishment of a data base for operational records is also described from which various aspects of the operating history can be projected.

  7. About the ASCR Computer Science Program | U.S. DOE Office of Science (SC)

    Office of Science (SC) Website

    About the ASCR Computer Science Program, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science (SC).

  8. final report for Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Johnson, Ralph E

    2013-04-10

    This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.

  9. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect (OSTI)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  10. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect (OSTI)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  11. Viscosity index calculated by program in GW-basic for personal computers

    SciTech Connect (OSTI)

    Anaya, C.; Bermudez, O. )

    1988-12-26

    A computer program has been developed to calculate the viscosity index of oils when viscosities at two temperatures are known.
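
    A minimal sketch of the underlying calculation, assuming the basic ASTM D2270 relation for oils with a viscosity index at or below 100; the reference viscosities L and H are normally interpolated from the standard's tables, the numbers below are illustrative, and the published GW-BASIC program's method may differ in detail.

    ```python
    def viscosity_index(U, L, H):
        """Basic VI relation (VI <= 100 case): VI = 100 * (L - U) / (L - H).

        U: kinematic viscosity at 40 C of the test oil, cSt
        L: 40 C viscosity of a 0-VI reference oil with the same 100 C viscosity
        H: 40 C viscosity of a 100-VI reference oil with the same 100 C viscosity
        """
        return 100.0 * (L - U) / (L - H)

    # Illustrative numbers only; L and H would come from the ASTM D2270 tables.
    print(round(viscosity_index(U=73.3, L=119.9, H=69.5)))   # -> 92
    ```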

  12. Advanced Simulation and Computing and Institutional R&D Programs | National Nuclear Security Administration (NNSA)

    National Nuclear Security Administration (NNSA)

    The Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (DOE/NNSA) Defense Programs' use of simulation-based evaluation of the nation's nuclear weapons stockpile. The ASC Program is responsible for providing the simulation tools and computing environments required to qualify and certify the nation's

  13. Computer Science Program | U.S. DOE Office of Science (SC)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science Program, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science (SC).

  14. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    SciTech Connect (OSTI)

    William M. Tang

    2011-02-09

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  15. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence: role in the context of fusion research. Plasma performance: in tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. Gradient-driven: this turbulent

  16. 2014 call for NERSC's Data Intensive Computing Pilot Program Due December 10

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    November 18, 2013, by Francesca Verdier. NERSC's Data Intensive Computing Pilot Program is now open for its second round of allocations to projects in data intensive science. This pilot aims to support and enable scientists to tackle their most demanding data intensive challenges. Selected projects will be piloting new methods and technologies targeting data

  17. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    March 18, 2016 - 3:31pm. News release from Lawrence Livermore National Laboratory, March 17, 2016. LIVERMORE, Calif. - A new U.S. Department of Energy (DOE) program

  18. The ENERGY-10 design-tool computer program

    SciTech Connect (OSTI)

    Balcomb, J.D.; Crowder, R.S. III.

    1995-11-01

    ENERGY-10 is a PC-based building energy simulation program for smaller commercial and institutional buildings that is specifically designed to evaluate energy-efficient features in the very early stages of the architectural design process. Developed specifically as a design tool, the program makes it easy to evaluate the integration of daylighting, passive solar design, low-energy cooling, and energy-efficient equipment into high-performance buildings. The simulation engines perform whole-building energy analysis for 8760 hours per year including both daylighting and dynamic thermal calculations. The primary target audience for the program is building designers, especially architects, but also includes HVAC engineers, utility officials, and architecture and engineering students and professors.

  19. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

    Broader source: Energy.gov [DOE]

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop October 25, 2011

  20. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect (OSTI)

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  1. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect (OSTI)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  2. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy.

  3. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  4. High performance computing and communications grand challenges program

    SciTech Connect (OSTI)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Base (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus and streptococcal protein G, are known to bind to IgG, and both have an alpha + beta sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  5. A computer program to determine the specific power of prismatic-core reactors

    SciTech Connect (OSTI)

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  6. Example Program and Makefile for BG/Q | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Example Program and Makefile for BG/Q (part of the ALCF documentation on how to compile and link for BG/Q).

  7. DITTY - a computer program for calculating population dose integrated over ten thousand years

    SciTech Connect (OSTI)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.
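
    A generic numerical illustration of the quantity DITTY reports, the time integral of collective dose over ten thousand years, using trapezoidal integration of an assumed dose-rate history; DITTY's release and pathway models are not reproduced here.

    ```python
    import numpy as np

    # Assumed, illustrative collective dose-rate history (person-Sv per year).
    years = np.linspace(0.0, 10_000.0, 1001)
    dose_rate = 0.05 * np.exp(-years / 3000.0)      # arbitrary decaying release term

    # Trapezoidal rule over the full 10,000-year window.
    collective_dose = float(np.sum(0.5 * (dose_rate[1:] + dose_rate[:-1]) * np.diff(years)))
    print(f"Time-integrated collective dose over 10,000 y: {collective_dose:.1f} person-Sv")
    ```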

  8. Methods, systems, and computer program products for network firewall policy optimization

    DOE Patents [OSTI]

    Fulp, Errin W.; Tarsa, Stephen J.

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
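
    A simplified sketch of the reordering idea: rules are emitted greedily in order of non-increasing match probability, except that two rules matching overlapping traffic with different actions keep their original relative order, used here as a stand-in for "preserving the firewall policy." The rules, match sets, and probabilities are invented, and the patented method is more general.

    ```python
    # Each rule: (name, match_set, action, probability). A rule's match set is a
    # plain Python set of flow labels for this sketch; real rules use 5-tuple ranges.
    rules = [
        ("r1", {"tcp/80", "tcp/443"}, "allow", 0.10),
        ("r2", {"tcp/443"},           "deny",  0.40),
        ("r3", {"udp/53"},            "allow", 0.35),
        ("r4", {"tcp/22"},            "deny",  0.15),
    ]

    def must_precede(a, b):
        """Keep original order when two rules overlap and act differently."""
        return bool(a[1] & b[1]) and a[2] != b[2]

    # Greedy constrained sort: emit the highest-probability rule whose required
    # predecessors (earlier original rules it conflicts with) are already emitted.
    ordered, remaining = [], list(rules)
    while remaining:
        ready = [r for r in remaining
                 if not any(must_precede(p, r) for p in remaining
                            if rules.index(p) < rules.index(r))]
        best = max(ready, key=lambda r: r[3])
        ordered.append(best)
        remaining.remove(best)

    print([r[0] for r in ordered])   # e.g. ['r3', 'r4', 'r1', 'r2']
    ```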

  9. Computing single step operators of logic programming in radial basis function neural networks

    SciTech Connect (OSTI)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_P: I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programs in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
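
    For concreteness, a plain Python sketch of the single step operator T_P itself, applied to a small invented propositional program and iterated to a fixed point; the radial basis function network and the particle swarm training described in the paper are not reproduced here.

    ```python
    # A propositional normal logic program: each clause is (head, positive_body, negative_body).
    # Invented example, chosen so the iteration below settles at a fixed point.
    program = [
        ("p", set(), set()),       # p.          (a fact)
        ("q", {"p"}, set()),       # q <- p.
        ("r", {"q"}, {"s"}),       # r <- q, not s.
    ]

    def T_P(I):
        """Single step operator: heads of clauses whose bodies are true under interpretation I."""
        return {head for head, pos, neg in program if pos <= I and not (neg & I)}

    # Iterate from the empty interpretation until T_P reaches its fixed point.
    I = set()
    while True:
        nxt = T_P(I)
        if nxt == I:
            break
        I = nxt
    print(sorted(I))   # -> ['p', 'q', 'r']
    ```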

  10. Recovery Act: Finite Volume Based Computer Program for Ground Source Heat Pump Systems

    SciTech Connect (OSTI)

    James A Menart, Professor

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled Finite Volume Based Computer Program for Ground Source Heat Pump Systems. The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The

  11. Finite Volume Based Computer Program for Ground Source Heat Pump System

    SciTech Connect (OSTI)

    Menart, James A.

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled "Finite Volume Based Computer Program for Ground Source Heat Pump Systems." The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump

  12. SNOW: a digital computer program for the simulation of ion beam devices

    SciTech Connect (OSTI)

    Boers, J.E.

    1980-08-01

    A digital computer program, SNOW, has been developed for the simulation of dense ion beams. The program simulates the plasma expansion cup (but not the plasma source itself), the acceleration region, and a drift space with neutralization if desired. The ion beam is simulated by computing representative trajectories through the device. The potentials are simulated on a large rectangular matrix array which is solved by iterative techniques. Poisson's equation is solved at each point within the configuration using space-charge densities computed from the ion trajectories combined with background electron and/or ion distributions. The simulation methods are described in some detail along with examples of both axially-symmetric and rectangular beams. A detailed description of the input data is presented.
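    As a rough illustration of the iterative potential solve described above (and only as an illustration, not the SNOW code), the sketch below applies a Jacobi iteration to Poisson's equation on a rectangular grid with fixed electrode potentials and an assumed, fixed space-charge density; SNOW itself recomputes the charge density from the ion trajectories between solves.

        # Illustrative Jacobi iteration for Poisson's equation on a rectangular grid,
        #   d2V/dx2 + d2V/dy2 = -rho/eps0,
        # with Dirichlet electrode boundaries. Grid size, potentials and the uniform
        # space charge are hypothetical placeholders; this is not the SNOW solver.
        import numpy as np

        nx, ny, h = 80, 40, 1e-3               # grid points and spacing (m)
        eps0 = 8.854e-12
        V = np.zeros((nx, ny))
        V[0, :] = 50e3                         # assumed emitter electrode at 50 kV
        V[-1, :] = 0.0                         # grounded extraction electrode (side walls also at 0 V)
        rho = np.full((nx, ny), 1e-5)          # assumed uniform ion space charge (C/m^3)

        for sweep in range(20000):
            V_new = V.copy()
            V_new[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                                        V[1:-1, 2:] + V[1:-1, :-2] +
                                        h**2 * rho[1:-1, 1:-1] / eps0)
            delta = np.max(np.abs(V_new - V))
            V = V_new
            if delta < 0.1:                    # stop when the largest update is below 0.1 V
                break
        print(f"stopped after {sweep + 1} sweeps; potential at grid centre = {V[nx//2, ny//2]:.0f} V")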

  13. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual. Revision 1

    SciTech Connect (OSTI)

    Wenzel, D.R.

    1994-02-01

    The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.
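    The decay-and-ingrowth bookkeeping referred to above can be pictured with the two-member Bateman solution for a parent/daughter pair. The sketch below is a simplified, stand-alone illustration with rounded half-lives, not the RSAC-5 implementation.

        # Two-member Bateman solution: parent decay plus daughter ingrowth over a delay.
        # Simplified illustration only; nuclide data are rounded placeholder values.
        import math

        def decay_ingrowth(A_parent0, A_daughter0, t_half_parent, t_half_daughter, t):
            """Return parent and daughter activities after time t (time units must match)."""
            lp = math.log(2) / t_half_parent
            ld = math.log(2) / t_half_daughter
            A_p = A_parent0 * math.exp(-lp * t)
            A_d = (ld / (ld - lp)) * A_parent0 * (math.exp(-lp * t) - math.exp(-ld * t)) \
                  + A_daughter0 * math.exp(-ld * t)
            return A_p, A_d

        # Example: Sr-90 (half-life ~28.8 y) and its daughter Y-90 (~64.1 h) after a 30-day holdup
        A_sr, A_y = decay_ingrowth(1.0, 0.0, 28.8 * 365.25 * 24.0, 64.1, 30 * 24.0)
        print(f"after 30 d: parent {A_sr:.4f}, daughter {A_y:.4f} (relative activity)")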

  14. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect (OSTI)

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. At the same time, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  15. Princeton graduate student Imène Goumiri creates computer program that

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab. By John Greenwald and Raphael Rosen, April 14, 2016. Imène Goumiri, a Princeton University graduate student, has worked with physicists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) to simulate a method for limiting instabilities that reduce the

  16. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    SciTech Connect (OSTI)

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for buying and selling of electricity from electric utilities to qualifying facilities (QF) as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production, which can be either gross production or net production; net production accounts for internal electricity usage by the QF. The program allows for adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.
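    The kind of calculation described above, an outage-adjusted and production-weighted average annual rate, can be sketched in a few lines. The fragment below is only a hedged illustration with made-up hourly series; it is not the SERICPAC methodology.

        # Hedged arithmetic sketch (not the SERICPAC algorithm): a production-weighted
        # average avoided-cost rate from hourly avoided costs and a QF output profile,
        # derated for scheduled and forced outages. All numbers are placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        hours = 8760
        avoided_cost = 0.04 + 0.02 * rng.random(hours)          # $/kWh, placeholder hourly avoided costs
        gross_output = 500.0 * (0.5 + 0.5 * rng.random(hours))  # kW, placeholder technology profile

        availability = (1.0 - 0.05) * (1.0 - 0.03)   # assumed 5% scheduled and 3% forced outage rates
        internal_use = 50.0                          # kW consumed inside the qualifying facility (assumed)
        net_output = np.clip(gross_output * availability - internal_use, 0.0, None)

        avg_rate = np.sum(avoided_cost * net_output) / np.sum(net_output)
        print(f"production-weighted average avoided-cost rate: {100 * avg_rate:.2f} cents/kWh")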

  17. An expert computer program for classifying stars on the MK spectral classification system

    SciTech Connect (OSTI)

    Gray, R. O.; Corbally, C. J.

    2014-04-01

    This paper describes an expert computer program (MKCLASS) designed to classify stellar spectra on the MK Spectral Classification system in a way similar to humans—by direct comparison with the MK classification standards. Like an expert human classifier, the program first comes up with a rough spectral type, and then refines that spectral type by direct comparison with MK standards drawn from a standards library. A number of spectral peculiarities, including barium stars, Ap and Am stars, λ Bootis stars, carbon-rich giants, etc., can be detected and classified by the program. The program also evaluates the quality of the delivered spectral type. The program currently is capable of classifying spectra in the violet-green region in either the rectified or flux-calibrated format, although the accuracy of the flux calibration is not important. We report on tests of MKCLASS on spectra classified by human classifiers; those tests suggest that over the entire HR diagram, MKCLASS will classify in the temperature dimension with a precision of 0.6 spectral subclass, and in the luminosity dimension with a precision of about one half of a luminosity class. These results compare well with human classifiers.

  18. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect (OSTI)

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  19. DUPLEX: A molecular mechanics program in torsion angle space for computing structures of DNA and RNA

    SciTech Connect (OSTI)

    Hingerty, B.E.

    1992-07-01

    DUPLEX produces energy minimized structures of DNA and RNA of any base sequence for single and double strands. The smallest subunits are deoxydinucleoside monophosphates, and up to 12 residues, single or double stranded, can be treated. In addition, it can incorporate NMR-derived interproton distances as constraints in the minimizations. Both upper and lower bounds for these distances can be specified. The program has been designed to run on a UNICOS Cray supercomputer, but should run, albeit slowly, on a laboratory computer such as a VAX or a workstation.

  20. Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code

    SciTech Connect (OSTI)

    Carbaugh, Eugene H.; Bihl, Donald E.

    2008-01-07

    The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.

  1. Princeton graduate student Imène Goumiri creates computer program that

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab. By John Greenwald and Raphael Rosen, April 14, 2016. Imène Goumiri led the design of a controller. (Photo by Elle Starkman/Office of Communications) Imène Goumiri, a Princeton University graduate student, has worked with physicists at

  2. Method, systems, and computer program products for implementing function-parallel network firewall

    DOE Patents [OSTI]

    Fulp, Errin W.; Farley, Ryan J.

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
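    The idea of splitting one rule set across several filtering nodes can be pictured with a toy example. The sketch below is a conceptual illustration only (it assumes each packet matches at most one rule and ignores rule precedence); it is not the patented design.

        # Toy illustration of a function-parallel rule split: each node holds only part of
        # the rule set, filters with that portion, and the per-node verdicts are combined.
        # Not the patented implementation; rule precedence and overlap are ignored.
        from dataclasses import dataclass

        @dataclass
        class Rule:
            proto: str
            dst_port: int
            action: str                     # "accept" or "deny"

        RULES = [Rule("tcp", 22, "accept"), Rule("tcp", 80, "accept"),
                 Rule("udp", 53, "accept"), Rule("tcp", 23, "deny")]

        def partition(rules, n_nodes):
            """Deal rules round-robin so every node holds less than the full set."""
            return [rules[i::n_nodes] for i in range(n_nodes)]

        def node_filter(portion, packet):
            for rule in portion:            # first local match wins; None means no local match
                if rule.proto == packet["proto"] and rule.dst_port == packet["dst_port"]:
                    return rule.action
            return None

        def firewall(nodes, packet, default="deny"):
            verdicts = [node_filter(portion, packet) for portion in nodes]
            matches = [v for v in verdicts if v is not None]
            return matches[0] if matches else default

        nodes = partition(RULES, 2)
        for pkt in ({"proto": "tcp", "dst_port": 80}, {"proto": "tcp", "dst_port": 23},
                    {"proto": "udp", "dst_port": 514}):
            print(pkt, "->", firewall(nodes, pkt))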

  3. REFLECT: A computer program for the x-ray reflectivity of bent perfect crystals

    SciTech Connect (OSTI)

    Etelaeniemi, V.; Suortti, P.; Thomlinson, W. (Dept. of Physics; Brookhaven National Lab., Upton, NY)

    1989-09-01

    The design of monochromators for x-ray applications, using either standard laboratory sources or synchrotron radiation sources, requires a knowledge of the reflectivity of the crystals. The reflectivity depends on the crystals used, the geometry of the reflection, the energy range of the radiation, and, in the present case, the cylindrical bending radius of the optical device. This report is intended to allow the reader to become familiar with, and therefore use, a computer program called REFLECT which we have used in the design of a dual beam Laue monochromator for synchrotron angiography. The results of REFLECT have been compared to measured reflectivities for both bent Bragg and Laue geometries. The results are excellent and should give full confidence in the use of the program. 6 refs.

  4. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect (OSTI)

    Nielson, K. K.; Sanders, R. W.

    1982-04-01

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bi-chromatic excitation as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards, and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches which give flexibility in performing the calculations best suited to the sample and the user needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one-to-two minutes using a PDP-11/34 computer operating under RSX-11M.

  5. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  6. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    SciTech Connect (OSTI)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
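    A stripped-down version of the ingestion-pathway arithmetic outlined above is shown below, purely as a hedged illustration: the concentration on a crop decays during holdup after harvest, the annual intake is converted to dose with a dose coefficient, and chronic exposure is accumulated over a chosen number of years. The numbers are placeholders, not PABLM parameters.

        # Hedged sketch of a single ingestion pathway; values are placeholders, not PABLM data.
        import math

        half_life_d = 30.0 * 365.25        # parent half-life, days (Cs-137, rounded)
        conc_at_harvest = 5.0              # Bq/kg on the crop at harvest (assumed)
        holdup_d = 14.0                    # days between harvest and consumption (assumed)
        intake_kg_per_y = 60.0             # annual consumption of this food (assumed)
        dose_coeff = 1.3e-8                # Sv per Bq ingested (placeholder coefficient)

        lam = math.log(2) / half_life_d
        conc_eaten = conc_at_harvest * math.exp(-lam * holdup_d)     # decay during holdup
        first_year_dose = conc_eaten * intake_kg_per_y * dose_coeff  # Sv

        # if the deposition is not replenished, later years' intake is further decayed
        years = 50
        integrated = sum(first_year_dose * math.exp(-lam * 365.25 * y) for y in range(years))
        print(f"first-year dose {1e6 * first_year_dose:.3f} uSv; "
              f"{years}-year integrated dose {1e6 * integrated:.2f} uSv")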

  7. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

    SciTech Connect (OSTI)

    Panchal, C.B.; Rabas, T.J.

    1991-05-01

    This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.
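    A very small rating example in the same spirit as the calculations described above (but not the report's method) is given below: with the condensing side taken as isothermal, an effectiveness-NTU relation converts an assumed overall conductance and coolant flow into a condenser duty and outlet temperature.

        # Generic effectiveness-NTU rating of a condenser with an isothermal condensing side.
        # All inputs are hypothetical placeholders; this is not the report's design method.
        import math

        UA = 2.0e6                     # overall conductance of the plate-fin core, W/K (assumed)
        m_cold, cp = 400.0, 4000.0     # cold seawater flow (kg/s) and specific heat (J/kg-K), assumed
        T_sat, T_cold_in = 21.0, 5.0   # condensing temperature and cold-water inlet, C (assumed)

        C_min = m_cold * cp
        NTU = UA / C_min
        eff = 1.0 - math.exp(-NTU)               # limit for a phase-changing (isothermal) hot side
        Q = eff * C_min * (T_sat - T_cold_in)    # condenser duty, W
        T_cold_out = T_cold_in + Q / C_min
        steam_condensed = Q / 2.45e6             # approx. latent heat of low-pressure steam, J/kg

        print(f"NTU={NTU:.2f}, effectiveness={eff:.2f}, duty={Q/1e6:.1f} MW, "
              f"coolant out={T_cold_out:.1f} C, steam condensed={steam_condensed:.1f} kg/s")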

  8. SALE: a simplified ALE computer program for fluid flow at all speeds

    SciTech Connect (OSTI)

    Amsden, A.A.; Ruppel, H.M.; Hirt, C.W.

    1980-06-01

    A simplified numerical fluid-dynamics computing technique is presented for calculating two-dimensional fluid flows at all speeds. It combines an implicit treatment of the pressure equation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique with the grid rezoning philosophy of the Arbitrary Lagrangian-Eulerian (ALE) method. As a result, it can handle flow speeds from supersonic to the incompressible limit in a grid that may be moved with the fluid in typical Lagrangian fashion, or held fixed in an Eulerian manner, or moved in some arbitrary way to give a continuous rezoning capability. The report describes the combined (ICEd-ALE) technique in the framework of the SALE (Simplified ALE) computer program, for which a general flow diagram and complete FORTRAN listing are included. A set of sample problems show how to use or modify the basic code for a variety of applications. Numerical listings are provided for a sample problem run with the SALE program.

  9. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
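    The implicit Newmark-Beta integration named above is a standard structural-dynamics scheme, and its single-degree-of-freedom form is easy to show. The sketch below is the textbook average-acceleration variant applied to a toy oscillator with a made-up forcing; it is not the HAWT code.

        # Textbook Newmark-beta (average acceleration: beta=1/4, gamma=1/2) for
        # m*a + c*v + k*u = F(t). Placeholder system and forcing; not the HAWT code.
        import math

        m, c, k = 100.0, 20.0, 4.0e4            # assumed mass, damping, stiffness
        beta, gamma = 0.25, 0.5
        dt, n_steps = 0.01, 1000

        def force(t):                           # placeholder forcing standing in for wind loads
            return 500.0 * math.sin(3.0 * t)

        u, v = 0.0, 0.0
        a = (force(0.0) - c * v - k * u) / m
        k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)

        for n in range(n_steps):
            t_next = (n + 1) * dt
            p_eff = (force(t_next)
                     + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                     + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                            + dt * (gamma / (2 * beta) - 1) * a))
            u_new = p_eff / k_eff
            a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            v = v + dt * ((1 - gamma) * a + gamma * a_new)
            u, a = u_new, a_new

        print(f"displacement at t = {n_steps * dt:.1f} s: {u:.5f} m")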

  10. Computational Analysis of an Evolutionarily Conserved Vertebrate Muscle Alternative Splicing Program

    SciTech Connect (OSTI)

    Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr, Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky, Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

    2006-06-15

    A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns were examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.

  11. LIAR -- A computer program for the modeling and simulation of high performance linacs

    SciTech Connect (OSTI)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straight-forward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  12. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users’ Manual

    SciTech Connect (OSTI)

    Dr. Bradley J Schrader

    2010-10-01

    The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

  13. Radiological Safety Analysis Computer (RSAC) Program Version 7.0 Users’ Manual

    SciTech Connect (OSTI)

    Dr. Bradley J Schrader

    2009-03-01

    The Radiological Safety Analysis Computer (RSAC) Program Version 7.0 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

  14. Advanced Scientific Computing Research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

  15. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  16. CriTi-CAL: A computer program for Critical Coiled Tubing Calculations

    SciTech Connect (OSTI)

    He, X.

    1995-12-31

    A computer software package for simulating coiled tubing operations has been developed at Rogaland Research. The software is named CriTi-CAL, for Critical Coiled Tubing Calculations. It is a PC program running under Microsoft Windows. CriTi-CAL is designed for predicting force, stress, torque, lockup, circulation pressure losses and along-hole-depth corrections for coiled tubing workover and drilling operations. CriTi-CAL features a user-friendly interface, integrated work string and survey editors, flexible input units and output format, on-line documentation and extensive error trapping. CriTi-CAL was developed by using a combination of Visual Basic and C. Such an approach is an effective way to quickly develop high quality small to medium size software for the oil industry. The software is based on the results of intensive experimental and theoretical studies on buckling and post-buckling of coiled tubing at Rogaland Research. The software has been validated by full-scale test results and field data.

  17. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    SciTech Connect (OSTI)

    Park, Won Young; Phadke, Amol; Shah, Nihar

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today's technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture the global energy saving potential from PC monitors, which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.
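    The savings-potential figure quoted above is, at its core, shipments times unit energy times an efficiency gain. The fragment below shows that back-of-envelope structure with placeholder inputs; these are not the values used in the report.

        # Back-of-envelope structure of a savings-potential estimate; inputs are placeholders.
        annual_shipments = 200e6        # monitors shipped per year (assumed)
        unit_energy_kwh = 80.0          # annual energy use per monitor, kWh (assumed)
        efficiency_gain = 0.30          # fraction of unit energy saved by the efficient design (assumed)

        savings_twh = annual_shipments * unit_energy_kwh * efficiency_gain / 1e9   # kWh -> TWh
        print(f"estimated savings potential: {savings_twh:.1f} TWh per year")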

  18. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

    SciTech Connect (OSTI)

    Jaffe, L. D.

    1984-02-15

    CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.
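    A toy version of the efficiency-versus-receiver-temperature scan described above is sketched below. It uses a generic loss model (optical efficiency minus receiver radiation and convection losses, multiplied by an assumed fraction-of-Carnot engine) with placeholder parameters; it is not the CONC/11 formulation.

        # Generic dish collector/engine efficiency scan; placeholder model and parameters,
        # not the CONC/11 formulation.
        import math

        eta_optical = 0.85        # reflectance x intercept factor (assumed)
        concentration = 1500.0    # concentrator aperture area / receiver aperture area (assumed)
        insolation = 900.0        # direct normal insolation, W/m^2 (assumed)
        emissivity, h_conv = 0.9, 10.0     # receiver emissivity and convective coefficient (assumed)
        sigma, T_amb = 5.67e-8, 300.0      # Stefan-Boltzmann constant, ambient temperature (K)

        def collector_efficiency(T_rec):
            losses = emissivity * sigma * (T_rec**4 - T_amb**4) + h_conv * (T_rec - T_amb)
            return eta_optical - losses / (insolation * concentration)

        def system_efficiency(T_rec):
            engine = 0.4 * (1.0 - T_amb / T_rec)     # assumed fraction-of-Carnot engine
            return collector_efficiency(T_rec) * engine

        best_T = max(range(400, 2001, 10), key=system_efficiency)
        print(f"peak system efficiency {system_efficiency(best_T):.3f} "
              f"(collector {collector_efficiency(best_T):.3f}) near T_receiver = {best_T} K")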

  19. Computer program for the sensitivity calculation of a CR-39 detector in a diffusion chamber for radon measurements

    SciTech Connect (OSTI)

    Nikezic, D. Stajic, J. M.; Yu, K. N.

    2014-02-15

      Computer software for calculation of the sensitivity of a CR-39 detector closed in a diffusion chamber to radon is described in this work. The software consists of two programs, both written in the standard Fortran 90 programming language. The physical background and a numerical example are given. The presented software is intended for numerous researchers in the radon measurement community. Previously published computer programs TRACK-TEST.F90 and TRACK-VISION.F90 [D. Nikezic and K. N. Yu, Comput. Phys. Commun. 174, 160 (2006); D. Nikezic and K. N. Yu, Comput. Phys. Commun. 178, 591 (2008)] are used here as subroutines to calculate the track parameters and to determine whether the track is visible or not, based on the incident angle, impact energy, etching conditions, gray level, and visibility criterion. The results obtained by the software, using five different V functions, were compared with the experimental data found in the literature. Application of two functions in this software reproduced the experimental data very well, while the other three gave lower sensitivity than the experiment.

  20. Programs for attracting under-represented minority students to graduate school and research careers in computational science. Final report for period October 1, 1995 - September 30, 1997

    SciTech Connect (OSTI)

    Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno

    1997-10-01

      Programs have been established at Florida A&M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer; these included workshops, meetings, and lectures. Through the exposure provided by this program to scientific ideas and research in computational science, it is likely that their successful application of tools from this interdisciplinary field will be high.

    1. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    2. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

    3. THERM3D -- A boundary element computer program for transient heat conduction problems

      SciTech Connect (OSTI)

      Ingber, M.S.

      1994-02-01

      The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.

    4. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

      Broader source: Energy.gov [DOE]

      The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

    5. Opportunities for Russian Nuclear Weapons Institute developing computer-aided design programs for pharmaceutical drug discovery. Final report

      SciTech Connect (OSTI)

      1996-09-23

      The goal of this study is to determine whether physicists at the Russian Nuclear Weapons Institute can profitably service the need for computer aided drug design (CADD) programs. The Russian physicists' primary competitive advantages are their ability to write particularly efficient code able to work with limited computing power, a history of working with very large, complex modeling systems, an extensive knowledge of physics and mathematics, and price competitiveness. Their primary competitive disadvantages are their lack of biology expertise, and cultural and geographic issues. The first phase of the study focused on defining the competitive landscape, primarily through interviews with and literature searches on the key providers of CADD software. The second phase focused on users of CADD technology to determine deficiencies in the current product offerings, to understand what product they most desired, and to define the potential demand for such a product.

    6. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect (OSTI)

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
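      The present-worth calculation mentioned above can be pictured with a small, hedged example: annual revenue requirements (return on the undepreciated balance plus straight-line depreciation and O&M) are discounted and summed. The figures and the straight-line treatment are placeholders, not RATEPAC's models.

          # Hedged present-worth sketch; placeholder inputs, not RATEPAC's financial models.
          capital = 500e6          # initial investment, $ (assumed)
          life = 30                # book life, years (assumed)
          rate_of_return = 0.10    # required return on rate base (assumed)
          discount = 0.08          # present-worth discount rate (assumed)
          om_cost = 20e6           # annual O&M expense, $ (assumed constant)

          depreciation = capital / life                         # straight-line
          pw = 0.0
          for year in range(1, life + 1):
              rate_base = capital - depreciation * (year - 1)   # undepreciated balance
              revenue_req = rate_of_return * rate_base + depreciation + om_cost
              pw += revenue_req / (1.0 + discount) ** year
          print(f"present worth of revenue requirements: ${pw / 1e6:.0f} million")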

    7. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

    8. MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide

      SciTech Connect (OSTI)

      Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

      1996-09-01

      This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

    9. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

      SciTech Connect (OSTI)

      Strange, D. L.; Bander, T. J.

      1981-04-01

      The MILDOS Computer Code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. This code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters. The surface concentrations are modified by radioactive decay, weathering and other environmental processes. The MILDOS Computer Code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are: • Inhalation • External exposure from ground concentrations • External exposure from cloud immersion • Ingestion of vegetables • Ingestion of meat • Ingestion of milk. Dose commitments are calculated using dose conversion factors, which are ultimately based
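      The sector-averaged Gaussian plume step referred to above has a compact standard form, sketched below for a single downwind distance and wind sector. The stability-class dispersion coefficient and all input values are illustrative placeholders, not MILDOS defaults.

          # Hedged sector-averaged Gaussian plume sketch; inputs and the assumed class-D
          # sigma_z correlation are placeholders, not the MILDOS implementation.
          import math

          def chi_over_Q(x, wind_freq, wind_speed, release_height, n_sectors=16):
              """Sector-averaged ground-level chi/Q (s/m^3) at downwind distance x (m)."""
              sigma_z = 0.06 * x / math.sqrt(1.0 + 0.0015 * x)    # assumed rural class-D correlation
              sector_arc = 2.0 * math.pi * x / n_sectors          # width of a 22.5-degree sector at x
              return (wind_freq * math.sqrt(2.0 / math.pi)
                      / (wind_speed * sigma_z * sector_arc)
                      * math.exp(-release_height**2 / (2.0 * sigma_z**2)))

          Q = 3.7e4      # assumed Rn-222 release rate, Bq/s
          xq = chi_over_Q(x=800.0, wind_freq=0.15, wind_speed=3.5, release_height=10.0)
          print(f"chi/Q = {xq:.2e} s/m^3; annual-average concentration = {Q * xq:.2e} Bq/m^3")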

    10. Development of computer program ENMASK for prediction of residual environmental masking-noise spectra, from any three independent environmental parameters

      SciTech Connect (OSTI)

      Chang, Y.-S.; Liebich, R. E.; Chun, K. C.

      2000-03-31

      Residual environmental sound can mask intrusive (unwanted) sound. It is a factor that can affect noise impacts and must be considered both in noise-impact studies and in noise-mitigation designs. Models for quantitative prediction of sensation level (audibility) and psychological effects of intrusive noise require an input with 1/3 octave-band spectral resolution of environmental masking noise. However, the majority of published residual environmental masking-noise data are given with either octave-band frequency resolution or only single A-weighted decibel values. A model has been developed that enables estimation of 1/3 octave-band residual environmental masking-noise spectra and relates certain environmental parameters to A-weighted sound level. This model provides a correlation among three environmental conditions: measured residual A-weighted sound-pressure level, proximity to a major roadway, and population density. Cited field-study data were used to compute the most probable 1/3 octave-band sound-pressure spectrum corresponding to any selected one of these three inputs. In turn, such spectra can be used as an input to models for prediction of noise impacts. This paper discusses specific algorithms included in the newly developed computer program ENMASK. In addition, the relative audibility of the environmental masking-noise spectra at different A-weighted sound levels is discussed, which is determined by using the methodology of program ENAUDIBL.

    11. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    12. Ocean-ice/oil-weathering computer program user's manual. Final report

      SciTech Connect (OSTI)

      Kirstein, B.E.; Redding, R.T.

      1987-10-01

      The ocean-ice/oil-weathering code is written in FORTRAN as a series of stand-alone subroutines that can easily be installed on most any computer. All of the trial-and-error routines, integration routines, and other special routines are written in the code so that nothing more than the normal system functions such as EXP are required. The code is user-interactive and requests input by prompting questions with suggested input. Therefore, the user can actually learn about the nature of crude oil and oil weathering by using this code. The ocean-ice oil-weathering model considers the following weathering processes: evaporation; dispersion (oil into water); mousse formation (water into oil); and spreading. These processes are used to predict the mass balance and composition of oil remaining in the slick as a function of time and environmental parameters.

    13. A Computer Program for Processing In Situ Permeable Flow Sensor Data

      Energy Science and Technology Software Center (OSTI)

      1996-04-15

      FLOW4.02 is used to interpret data from In Situ Permeable Flow Sensors which are instruments that directly measure groundwater flow velocity in saturated, unconsolidated geologic formations (Ballard, 1994, 1996; Ballard et al., 1994; Ballard et al., in press). The program accepts as input the electrical resistance measurements from the thermistors incorporated within the flow sensors, converts the resistance data to temperatures and then uses the temperature information to calculate the groundwater flow velocity and associated uncertainty. The software includes many capabilities for manipulating, graphically displaying and writing to disk the raw resistance data, the temperature data and the calculated flow velocity information. This version is a major revision of a previously copyrighted version (FLOW1.0).
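      The first processing step described above, converting thermistor resistance readings to temperature, is commonly done with a Steinhart-Hart fit. The sketch below uses generic coefficients for a 10 kOhm thermistor as placeholders; it is not the flow-sensor calibration, and the velocity calculation is omitted.

          # Steinhart-Hart resistance-to-temperature conversion with generic placeholder
          # coefficients; not the In Situ Permeable Flow Sensor calibration.
          import math

          A, B, C = 1.129148e-3, 2.34125e-4, 8.76741e-8    # generic 10 kOhm NTC coefficients (assumed)

          def resistance_to_celsius(r_ohm):
              lnr = math.log(r_ohm)
              return 1.0 / (A + B * lnr + C * lnr**3) - 273.15

          for r in (15000.0, 10000.0, 6500.0):             # example readings, ohms
              print(f"{r:8.0f} ohm -> {resistance_to_celsius(r):6.2f} C")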

    14. Load determination for long cable bolt support using computer aided bolt load estimation (CABLE) program

      SciTech Connect (OSTI)

      Bawden, W.F.; Moosavi, M.; Hyett, A.J.

      1996-12-01

      In this paper a numerical formulation is presented for determination of the axial load along a cable bolt for a prescribed distribution of rock mass displacement. Results using the program CABLE indicate that during excavation, the load distribution that develops along an untensioned fully grouted cable bolt depends on three main factors: (i) the properties of the cable itself, (ii) the shear force that develops due to bond at the cable-grout interface (i.e. bond stiffness), and (iii) the distribution of rock mass displacement along the cable bolt length. In general, the effect of low modulus rock and mining-induced stress decreases in reducing bond strength, as determined from short embedment length tests, is reflected in the development of axial loads significantly less than the ultimate tensile capacity even for long cable bolts. However, the load distribution is also dependent on the deformation distribution in the reinforced rock mass. Higher cable bolt loads will be developed for a rock mass that behaves as a discontinuum, with deformation concentrated on a few fractures, than for one which behaves as a continuum, either due to a total lack of fractures or a very high fracture density. This result suggests that the stiffness of a fully grouted cable bolt is not simply a characteristic of the bolt and grout used, but also of the deformation behavior of the ground. In other words, the same combination of bolt and grout will be stiffer if the rock behaves as a discontinuum than if it behaves as a continuum. This paper also explains the laboratory test program used to determine the constitutive behavior of the Garford bulb and Nutcase cable bolts. Details of the test setup as well as the obtained results are summarized and discussed.

    15. TRUST: A Computer Program for Variably Saturated Flow in Multidimensional, Deformable Media

      SciTech Connect (OSTI)

      Reisenauer, A. E.; Key, K. T.; Narasimhan, T. N.; Nelson, R. W.

      1982-01-01

      The computer code, TRUST, provides a versatile tool to solve a wide spectrum of fluid flow problems arising in variably saturated deformable porous media. The governing equations express the conservation of fluid mass in an elemental volume that has a constant volume of solid. Deformation of the skeleton may be nonelastic. Permeability and compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may include hysteresis. The code developed by T. N. Narasimhan grew out of the original TRUMP code written by A. L. Edwards. The code uses an integrated finite difference algorithm for numerically solving the governing equation. Marching in time is performed by a mixed explicit-implicit numerical procedure in which the time step is internally controlled. The time step control and related features in the TRUST code provide an effective control of the potential numerical instabilities that can arise in the course of solving this difficult class of nonlinear boundary value problems. This document brings together the equations, theory, and user's manual for the code as well as a sample case with input and output.

    16. User's manual for EROSION/MOD1: A computer program for fluids-solids erosion

      SciTech Connect (OSTI)

      Lyczkowski, R.W.; Bouillard, J.X.; Folga, S.M.; Chang, S.L.

      1992-09-01

      This report describes EROSION/MOD1, a computer program that was developed as a two-dimensional analytical tool for the general analysis of erosion in fluid-solids systems and the specific analysis of erosion in bubbling fluidized-bed combustors. Contained herein are implementations of Finnie's impaction erosion model, Neilson and Gilchrist's combined ductile and brittle erosion model, and several forms of the monolayer energy dissipation erosion model. These models and their implementations are described briefly. The global structure of EROSION/MOD1 that contains these models is also discussed. The input data for EROSION/MOD1 are given, and a sample problem for a fluidized bed is described. The hydrodynamic input data are assumed to come from the output of FLUFIX/MOD2.
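      Of the models listed above, Finnie's ductile impaction model has a well-known closed form whose angle dependence is easy to show. The sketch below is the commonly quoted simplified version with a placeholder material constant; it is not the EROSION/MOD1 source.

          # Commonly quoted simplified form of Finnie's ductile erosion model; the material
          # constant is a placeholder and this is not the EROSION/MOD1 implementation.
          import math

          def finnie_erosion(velocity, angle_deg, k_material=2.0e-9):
              """Relative eroded mass per unit mass of impacting particles (illustrative)."""
              a = math.radians(angle_deg)
              if math.tan(a) <= 1.0 / 3.0:
                  f = math.sin(2.0 * a) - 3.0 * math.sin(a) ** 2
              else:
                  f = math.cos(a) ** 2 / 3.0
              return k_material * velocity**2 * f

          for angle in (10, 20, 30, 45, 90):
              print(f"impact angle {angle:2d} deg -> relative erosion {finnie_erosion(25.0, angle):.3e}")

      Note that this form predicts zero erosion at normal incidence, a recognized limitation of Finnie's ductile model and one reason combined ductile/brittle treatments such as Neilson and Gilchrist's are also provided.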

    17. MASBAL: A computer program for predicting the composition of nuclear waste glass produced by a slurry-fed ceramic melter

      SciTech Connect (OSTI)

      Reimus, P.W.

      1987-07-01

      This report is a user's manual for the MASBAL computer program. MASBAL's objectives are to predict the composition of nuclear waste glass produced by a slurry-fed ceramic melter based on a knowledge of process conditions; to generate simulated data that can be used to estimate the uncertainty in the predicted glass composition as a function of process uncertainties; and to generate simulated data that can be used to provide a measure of the inherent variability in the glass composition as a function of the inherent variability in the feed composition. These three capabilities are important to nuclear waste glass producers because there are constraints on the range of compositions that can be processed in a ceramic melter and on the range of compositions that will be acceptable for disposal in a geologic repository. MASBAL was developed specifically to simulate the operation of the West Valley Component Test system, a commercial-scale ceramic melter system that will process high-level nuclear wastes currently stored in underground tanks at the site of the Western New York Nuclear Services Center (near West Valley, New York). The program is flexible enough, however, to simulate any slurry-fed ceramic melter system. 4 refs., 16 figs., 5 tabs.

    18. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

      SciTech Connect (OSTI)

      Barbara Chapman

      2012-02-01

      OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

    19. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect (OSTI)

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

    20. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programming Programming Compiling and linking programs on Euclid. Compiling Codes How to compile and link MPI codes on Euclid. Read More » Using the ACML Math Library How to compile and link a code with the ACML library and include the $ACML environment variable. Read More » Process Limits The hard and soft process limits are listed. Read More » Last edited: 2016-04-29 11:35:11

    1. Introduction to Radcalc: A computer program to calculate the radiolytic production of hydrogen gas from radioactive wastes in packages

      SciTech Connect (OSTI)

      Green, J.R.; Hillesland, K.E.; Field, J.G.

      1995-04-01

      A calculational technique for quantifying the concentration of hydrogen generated by radiolysis in sealed radioactive waste containers was developed in a U.S. Department of Energy (DOE) study conducted by EG&G Idaho, Inc., and the Electric Power Research Institute (EPRI) TMI-2 Technology Transfer Office. The study resulted in report GEND-041, entitled "A Calculational Technique to Predict Combustible Gas Generation in Sealed Radioactive Waste Containers". The study also resulted in a presentation to the U.S. Nuclear Regulatory Commission (NRC), which gained acceptance of the methodology for use in ensuring compliance with NRC IE Information Notice No. 84-72 (NRC 1984) concerning the generation of hydrogen within packages. NRC IE Information Notice No. 84-72, "Clarification of Conditions for Waste Shipments Subject to Hydrogen Gas Generation", applies to any package containing water and/or organic substances that could radiolytically generate combustible gases. Using the GEND-041 calculational methodology, EPRI developed Radcalc, a simple spreadsheet-format computer program that predicts hydrogen gas concentrations in low-level radioactive waste containers. The computer code was extensively benchmarked against TMI-2 (Three Mile Island) EPICOR II resin bed measurements. The benchmarking showed that the model predicted hydrogen gas concentrations within 20% of the measured concentrations. Radcalc for Windows was developed using the same calculational methodology. The code is written in Microsoft Visual C++ 2.0 and includes a Microsoft Windows compatible menu-driven front end. In addition to hydrogen gas concentration calculations, Radcalc for Windows also provides transportation and packaging information such as pressure buildup, total activity, decay heat, fissile activity, TRU activity, and transportation classifications.
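
      Note: Radcalc's internal algorithm is not given in this record. The Python sketch below only illustrates the general G-value approach to radiolytic hydrogen generation on which such methods are based; the G value, decay heat, absorbed fraction, void volume, and temperature are hypothetical inputs, not values from GEND-041 or Radcalc.

        EV_PER_J = 1.0 / 1.602176634e-19   # electron volts per joule
        AVOGADRO = 6.02214076e23           # molecules per mole
        R_GAS = 8.314462618                # J/(mol K)

        def hydrogen_moles(decay_power_w, g_h2, fraction_absorbed, time_s):
            """Moles of H2 generated: absorbed energy (eV) times G(H2) molecules per 100 eV."""
            energy_ev = decay_power_w * fraction_absorbed * time_s * EV_PER_J
            return energy_ev * g_h2 / 100.0 / AVOGADRO

        def hydrogen_volume_percent(n_h2, void_volume_m3, temperature_k, fill_pressure_pa=101325.0):
            """Volume percent of H2 in the sealed void space, assuming ideal gases."""
            n_air = fill_pressure_pa * void_volume_m3 / (R_GAS * temperature_k)
            return 100.0 * n_h2 / (n_h2 + n_air)

        if __name__ == "__main__":
            # Hypothetical drum: 5 W decay heat, G(H2) = 0.45, 30% absorbed in water, 1 year, 100 L void.
            n = hydrogen_moles(5.0, 0.45, 0.30, 3.156e7)
            print(f"H2 generated: {n:.2f} mol")
            print(f"H2 concentration: {hydrogen_volume_percent(n, 0.100, 298.15):.1f} vol%")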

    2. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programming Programming The genepool system has a diverse set of software development tools and a rich environment for delivering their functionality to users. Genepool has adopted a modular system which has been adapted from the Programming Environments similar to those provided on the Cray systems at NERSC. The Programming Environment is managed by a meta-module named similar to "PrgEnv-gnu/4.6". The "gnu" indicates that it is providing the GNU environment, principally GCC,

    3. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programming Tuning Options: Tips for tuning performance on the Hopper system ... The ACML library is also supported on Hopper and Franklin. PGAS Language ...

    4. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Storage & File Systems Application Performance Data & Analytics Job Logs & Statistics ... Each programming environment contains the full set of compatible compilers and libraries. ...

    5. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

      SciTech Connect (OSTI)

      Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

      2014-09-01

      Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that the worker can focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system can be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) guides the user down the path of relevant steps based on the current conditions. This feature reduces the user's workload and inherently reduces the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactor Sustainability Program

    6. PADLOC: a one-dimensional computer program for calculating coolant and plateout fission-product concentrations. Part 2

      SciTech Connect (OSTI)

      Hudritsch, W.W.

      1981-09-01

      The behavior of some of the prominent fission products along their convection pathways is dominated by the interaction of other species with them. This gave rise to the development of a plateout code capable of analyzing coupled species effects. The single species plateout computer program PADLOC is described in Part I of this report. The present Part II is concerned with the extension of PADLOC to MULTI*PADLOC, a multiple species version of PADLOC. MULTI*PADLOC is designed to analyze the time and one-dimensional spatial dependence of the concentrations of interacting (fission product) species in the carrier gas and on the surrounding wall surfaces on an arbitrary network of flow channels. The problem solved is one of mass transport of several impurity species in a gas, including the effects of sources in the gas and on the surface, convection along the flow paths, decay interaction, sorption interaction on the wall surfaces, and chemical reaction interactions in the gas and on the surfaces. These phenomena are governed by a system of coupled, nonlinear partial differential equations. The solution is achieved by: (a) linearizing the equations about an approximate solution and employing a Newton-Raphson iteration technique, (b) employing a finite difference solution method with an implicit time integration, and (c) employing a substructuring technique to logically organize the systems of equations for an arbitrary flow network.
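
      Note: as a schematic of the solution strategy described above (linearization about an approximate solution with Newton-Raphson iteration inside an implicit time step), the Python sketch below advances a toy two-species system with decay coupling and a nonlinear sorption sink; the reduction to two ordinary differential equations and all coefficients are assumptions for illustration, not MULTI*PADLOC's equations.

        def rhs(c, s=(1.0, 0.0), lam=(0.3, 0.1), k=(0.05, 0.02)):
            """Source, decay coupling (species 1 decays into species 2), and quadratic sorption sink."""
            c1, c2 = c
            return (s[0] - lam[0] * c1 - k[0] * c1 * c1,
                    s[1] + lam[0] * c1 - lam[1] * c2 - k[1] * c2 * c2)

        def jacobian(c, lam=(0.3, 0.1), k=(0.05, 0.02)):
            c1, c2 = c
            return ((-lam[0] - 2.0 * k[0] * c1, 0.0),
                    (lam[0], -lam[1] - 2.0 * k[1] * c2))

        def backward_euler_step(c_old, dt, tol=1e-10, max_iter=25):
            """One implicit time step: Newton-Raphson on F(c) = c - c_old - dt*rhs(c) = 0."""
            c = list(c_old)                          # initial guess: previous solution
            for _ in range(max_iter):
                f = rhs(c)
                F = [c[i] - c_old[i] - dt * f[i] for i in range(2)]
                J = jacobian(c)
                a11, a12 = 1.0 - dt * J[0][0], -dt * J[0][1]   # Newton matrix I - dt*J
                a21, a22 = -dt * J[1][0], 1.0 - dt * J[1][1]
                det = a11 * a22 - a12 * a21
                d1 = (-F[0] * a22 + F[1] * a12) / det          # 2x2 solve by Cramer's rule
                d2 = (-F[1] * a11 + F[0] * a21) / det
                c[0] += d1
                c[1] += d2
                if abs(d1) + abs(d2) < tol:
                    break
            return tuple(c)

        if __name__ == "__main__":
            c = (0.0, 0.0)
            for _ in range(50):
                c = backward_euler_step(c, dt=0.5)
            print(f"concentrations after 25 time units: c1 = {c[0]:.4f}, c2 = {c[1]:.4f}")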

    7. Advanced Scientific Computing Research (ASCR)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... ASCR's programs have helped establish computation as a third pillar of science along with theory and physical experiments. Sandia has extensive ASCR programs in Computer Science ...

    8. Programming

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      using MPI and OpenMP on NERSC systems, the same does not always exist for other supported parallel programming models such as UPC or Chapel. At the same time, we know that these...

    9. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    10. Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Workshop Information: Material properties are determined by their structures or atomic arrangements. Three themes are emerging that offer unprecedented opportunities in static and transient material research and discoveries in the coming decade: high-energy X-ray free electron lasers (XFELs), high-performance imaging detector technology, and exascale computing. In structure determination, XFEL plays the role of information generation, imaging detectors the role of information collection, and

    11. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale Computing: Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in current initiatives addressing

    12. Computer program for predicting surface subsidence resulting from pressure depletion in geopressured wells: subsidence prediction for the DOW test well No. 1, Parcperdue, Louisiana

      SciTech Connect (OSTI)

      Janssen, J.C.; Carver, D.R.; Bebout, D.G.; Bachman, A.L.

      1981-01-01

      The nucleus-of-strain concept is used to construct a computer program for predicting surface subsidence due to pressure reduction in geopressured reservoirs. Numerical integration allows one to compute the vertical displacement of the ground surface directly above and beyond the aquifer boundaries which results from the pressure reduction in each of the small finite volumes into which the aquifer is partitioned. The program treats depth (measured from the surface to the mean thickness of the aquifer) as a constant. Variation in aquifer thickness is accounted for by linear interpolation from one boundary to its opposite. In this simple model, subsidence is proportional to the pressure reduction (considered constant in this presentation) and to but one physical parameter, Cm(1-ν), in which Cm is the coefficient of uniaxial compaction and ν is Poisson's ratio.
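
      Note: the Python sketch below illustrates the nucleus-of-strain summation using the commonly cited half-space expression uz = [Cm(1-nu)/pi] * dP * dV * D / (r^2 + D^2)^(3/2) for each depleted volume element; the aquifer dimensions and material parameters are hypothetical, and the kernel formula itself is stated here as an assumption rather than taken from the report.

        import math

        def nucleus_subsidence(cm, nu, dp, dv, depth, r):
            """Downward surface displacement from one depleted volume element (nucleus of strain)."""
            return (cm * (1.0 - nu) / math.pi) * dp * dv * depth / (r * r + depth * depth) ** 1.5

        def subsidence_at(x_obs, y_obs, cells, cm, nu, dp, depth):
            """Numerically integrate (sum) the nucleus contributions over all aquifer cells."""
            total = 0.0
            for xc, yc, dv in cells:
                r = math.hypot(x_obs - xc, y_obs - yc)
                total += nucleus_subsidence(cm, nu, dp, dv, depth, r)
            return total

        def build_cells(half_width, thickness, n=40):
            """Partition a square aquifer of side 2*half_width into n x n prisms of equal volume."""
            dx = 2.0 * half_width / n
            cells = []
            for i in range(n):
                for j in range(n):
                    xc = -half_width + (i + 0.5) * dx
                    yc = -half_width + (j + 0.5) * dx
                    cells.append((xc, yc, dx * dx * thickness))
            return cells

        if __name__ == "__main__":
            # Hypothetical geopressured aquifer: 2 km x 2 km, 50 m thick, mean depth 4 km.
            cells = build_cells(half_width=1000.0, thickness=50.0)
            cm, nu, dp = 5.0e-10, 0.25, 10.0e6   # compaction coefficient (1/Pa), Poisson's ratio, dP (Pa)
            for x in (0.0, 1000.0, 3000.0):
                uz = subsidence_at(x, 0.0, cells, cm, nu, dp, depth=4000.0)
                print(f"subsidence at x = {x:6.0f} m : {uz * 100:.2f} cm")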

    13. SU-E-T-596: P3DVHStats - a Novel, Automatic, Institution Customizable Program to Compute and Report DVH Quantities On Philips Pinnacle TPS

      SciTech Connect (OSTI)

      Wu, C

      2015-06-15

      Purpose: To implement a novel, automatic, institutionally customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: We developed an add-on program (P3DVHStats) to enable automatic evaluation of DVH quantities (including both volume- and dose-based quantities, such as V98, V100, and D2) and automatic generation of PDF-format reports for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides the user convenient access to the program when needed. The activated script first exports DVH data for user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, uses the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program also generates a PDF report of the results and exports it from Pinnacle to the EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin-client and P3PC access) for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes to use an MS-Excel worksheet to calculate these DVH quantities for IMRT/VMAT plans and manually save them as a PDF report; with the new program, it takes literally a few mouse clicks and less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java language based program was successfully implemented and customized to our institutional needs. It is shown to dramatically reduce the time and effort needed for computing DVH quantities and reporting to the EMR.
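
      Note: the tool itself is Pinnacle-script and Java based and is not reproduced here; the Python sketch below only shows how DVH quantities such as V98, V100, and D2 can be computed from per-voxel dose data. Equal voxel volumes and the particular VX/DX definitions used are assumptions for illustration.

        import bisect
        import random

        def dvh_quantities(doses_gy, prescription_gy):
            """VX: percent of volume receiving at least X% of prescription; DX: minimum dose to hottest X% of volume."""
            doses = sorted(doses_gy)
            n = len(doses)

            def v_at(percent_of_rx):
                threshold = prescription_gy * percent_of_rx / 100.0
                return 100.0 * (n - bisect.bisect_left(doses, threshold)) / n

            def d_at(percent_volume):
                k = max(1, round(n * percent_volume / 100.0))
                return doses[n - k]

            return {"V98": v_at(98.0), "V100": v_at(100.0), "D2": d_at(2.0), "D95": d_at(95.0)}

        if __name__ == "__main__":
            random.seed(1)
            # Hypothetical target: 60 Gy prescription, voxel doses clustered near the prescription.
            voxel_doses = [random.gauss(61.0, 1.5) for _ in range(20000)]
            for name, value in dvh_quantities(voxel_doses, prescription_gy=60.0).items():
                unit = "%" if name.startswith("V") else " Gy"
                print(f"{name}: {value:.2f}{unit}")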

    14. Mira Computational Readiness Assessment | Argonne Leadership...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ...

    15. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    16. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software. Draft report for comment

      SciTech Connect (OSTI)

      Bierschbach, M.C.; Mencinsky, G.J.

      1993-10-01

      With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the U.S. Nuclear Regulatory Commission (NRC) for review decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    17. Estimating boiling water reactor decommissioning costs: A user's manual for the BWR Cost Estimating Computer Program (CECP) software. Final report

      SciTech Connect (OSTI)

      Bierschbach, M.C.

      1996-06-01

      Nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review their decommissioning cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning boiling water reactor (BWR) power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    18. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

      SciTech Connect (OSTI)

      Bierschbach, M.C.

      1994-12-01

      With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the U.S. Nuclear Regulatory Commission (NRC) for review decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    19. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

      SciTech Connect (OSTI)

      Farbin, Amir

      2015-07-15

      This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

    20. Thermoelectric Materials by Design, Computational Theory and...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Thermoelectric Materials by Design, Computational Theory and Structure: 2009 DOE Hydrogen Program and Vehicle Technologies Program...

    1. Report from the Committee of Visitors on its Review of the Processes and Procedures used to Manage the Theory and Computations Program, Fusion Energy Sciences Advisory Committee

      SciTech Connect (OSTI)

      none,

      2004-03-01

      A Committee of Visitors (COV) was formed to review the procedures used by the Office of Fusion Energy Sciences to manage its Theory and Computations program. The COV was pleased to conclude that the research portfolio supported by the OFES Theory and Computations Program was of very high quality. The Program supports research programs at universities, research industries, and national laboratories that are well regarded internationally and address questions of high relevance to the DOE. A major change in the management of the Theory and Computations program over the past few years has been the introduction of a system of comparative peer review to guide the OFES Theory Team in selecting proposals for funding. The COV was impressed with the success of OFES in its implementation of comparative peer review and with the quality of the reviewers chosen by the OFES Theory Team. The COV concluded that the competitive peer review process has improved steadily over the three years that it has been in effect and that it has improved both the fairness and accountability of the proposal review process. While the COV commends OFES in its implementation of comparative review, the COV offers the following recommendations in the hope that they will further improve the comparative peer review process: The OFES should improve the consistency of peer reviews. We recommend adoption of a “results-oriented” scoring system in their guidelines to referees (see Appendix II), a greater use of review panels, and a standard format for proposals; The OFES should further improve the procedures and documentation for proposal handling. We recommend that the “folders” documenting funding decisions contain all the input from all of the reviewers, that OFES document their rationale for funding decisions which are at variance with the recommendation of the peer reviewers, and that OFES provide a Summary Sheet within each folder; The OFES should better communicate the procedures used to

    2. Paul C. Messina | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing ...

    3. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    4. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

      SciTech Connect (OSTI)

      Carey, D.C.

      1999-12-09

      TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effect of conventional local geometric aberrations (due to large phase space) or of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
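
      Note: as an illustration of the lumped-element, transfer-matrix picture underlying such ray-tracing codes (not TURTLE's input format or physics), the Python sketch below pushes a small-phase-space bundle of rays through a hypothetical drift / thin-lens quadrupole / drift line in the (x, x') plane.

        import random

        def drift(length_m):
            """2x2 transfer matrix for a field-free drift."""
            return ((1.0, length_m), (0.0, 1.0))

        def thin_quad(focal_length_m):
            """Thin-lens quadrupole, focusing in this plane for a positive focal length."""
            return ((1.0, 0.0), (-1.0 / focal_length_m, 1.0))

        def apply(matrix, ray):
            x, xp = ray
            return (matrix[0][0] * x + matrix[0][1] * xp,
                    matrix[1][0] * x + matrix[1][1] * xp)

        def trace(ray, elements):
            for m in elements:
                ray = apply(m, ray)
            return ray

        if __name__ == "__main__":
            random.seed(0)
            beamline = [drift(2.0), thin_quad(1.5), drift(2.0)]   # hypothetical lumped elements
            # Sample a small-phase-space beam: 1 mm and 0.5 mrad rms at the source.
            rays = [(random.gauss(0.0, 1.0e-3), random.gauss(0.0, 0.5e-3)) for _ in range(10000)]
            final = [trace(r, beamline) for r in rays]
            rms_x = (sum(x * x for x, _ in final) / len(final)) ** 0.5
            print(f"rms beam size at the end of the line: {rms_x * 1000:.3f} mm")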

    5. Supercomputing Challenge Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      program that teaches mid school and high school students how to use powerful computers to model real-world problems and to explore computational approaches to their...

    6. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    7. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

      SciTech Connect (OSTI)

      Robert W. Numrich

      2008-04-22

      The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to extend the

    8. Computing Frontier: Distributed Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). 1.1 Introduction: The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

    9. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    10. 5 Checks & 5 Tips for INCITE | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ...

    11. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0, Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    12. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    13. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

      SciTech Connect (OSTI)

      Watson, S.B.; Ford, M.R.

      1980-02-01

      A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations in computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit of intake, and derived air concentration.
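
      Note: as a schematic of how the three modules combine (not the ICRP code itself), the Python sketch below multiplies cumulated activity in source organs by the specific effective energy delivered to a target organ; the organ list, numerical values, and the 1.6e-10 unit-conversion factor are presented here as assumptions for illustration only.

        MEV_PER_G_TO_SV = 1.6e-10   # converts MeV per gram per transformation to sievert

        def committed_dose_equivalent(cumulated_activity, see_to_target):
            """H50(target) = 1.6e-10 * sum over source organs S of U_S * SEE(target <- S).

            cumulated_activity -- transformations in each source organ over 50 years {organ: U_S}
            see_to_target      -- specific effective energy, MeV/(g * transformation) {organ: SEE}
            """
            return MEV_PER_G_TO_SV * sum(
                u_s * see_to_target.get(organ, 0.0) for organ, u_s in cumulated_activity.items())

        if __name__ == "__main__":
            # Placeholder cumulated activities and SEE values for a hypothetical intake.
            u_s = {"lung": 4.0e12, "liver": 1.0e12, "total body": 6.0e12}
            see_lung = {"lung": 2.5e-4, "liver": 1.0e-6, "total body": 3.0e-6}
            print(f"committed dose equivalent to lung: {committed_dose_equivalent(u_s, see_lung):.3f} Sv")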

    14. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCS Computer, Computational, and Statistical Sciences Computational physics, computer science, applied mathematics, statistics and the integration of large data streams are central ...

    15. ALCF Acknowledgment Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User ...

    16. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    17. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Students Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Students Peter Ahrens Peter Ahrens Electrical Engineering & Computer Science BS UC Berkeley Jenniffer Estrada Jenniffer Estrada Computer Science MS Youngstown

    18. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division - Meetings and Workshops Awards Awards Night 2012 R&D LEADERSHIP, DIRECTOR LEVEL Winner: Brian Worley Organization: Computational Sciences & Engineering Division Citation: For exemplary program leadership of a successful and growing collaboration with the Department of Defense and for successfully initiating and providing oversight of a new data program with the Centers for Medicare and Medicaid Services. TECHNICAL SUPPORT Winner: Michael Matheson Organization:

    19. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, Fortran and with the LINUX operating system. Duration & Location The program will last ten...

    20. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead ...

    1. Program Activities | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      The Advanced Simulation and Computing program (ASC) is part of ... Office of Defense Programs. Defense Programs has six components: Research, ... at making the scientific and ...

    2. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab: The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy-efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    3. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LaboratoryNational Security Education Center Menu About Seminar Series Summer Schools Workshops Viz Collab IS&T Projects NSEC » Information Science and Technology Institute (ISTI) » Summer School Programs » Parallel Computing Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff

    4. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 2, User's manual

      SciTech Connect (OSTI)

      Gartling, D.K.

      1996-05-01

      User instructions are given for the finite element electromagnetics program TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.

    5. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOE Patents [OSTI]

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    6. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect (OSTI)

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

    7. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex. The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing

    8. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

    9. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

      SciTech Connect (OSTI)

      Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

      2012-02-01

      The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, covering four dairy processes: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated based on the specific detail level of process or plant, i.e., 1) plant level; 2) process-group level; and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and water

    10. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes: a more detailed hierarchical map of the topology of a compute node is available.

    11. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http:isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    12. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Recommended Reading & References The Parallel Computing Summer Research Internship covers a broad range of topics that you may not have

    13. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

      SciTech Connect (OSTI)

      Biffle, J.H.

      1993-02-01

      JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material models. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.
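
      Note: JAC3D itself is a vectorized Fortran finite element code; the Python sketch below only illustrates the named solution method, a nonlinear conjugate gradient iteration (Polak-Ribiere form with a secant line search), applied to a toy one-dimensional chain of springs with a cubic hardening term. The model problem and all coefficients are invented for illustration.

        def gradient(u, f, k_diag=2.0, k_off=-1.0, alpha=0.1):
            """Out-of-balance force for a pinned chain of springs with a cubic (nonlinear) term."""
            n = len(u)
            g = []
            for i in range(n):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                g.append(k_diag * u[i] + k_off * (left + right) + alpha * u[i] ** 3 - f[i])
            return g

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def line_search(u, d, f):
            """Secant iteration for the step s where the directional derivative g(u + s*d) . d vanishes."""
            def slope(s):
                return dot(gradient([ui + s * di for ui, di in zip(u, d)], f), d)
            s_prev, s_cur = 0.0, 1.0
            g_prev, g_cur = slope(s_prev), slope(s_cur)
            for _ in range(20):
                if abs(g_cur - g_prev) < 1e-30 or abs(g_cur) < 1e-16:
                    break
                s_next = s_cur - g_cur * (s_cur - s_prev) / (g_cur - g_prev)
                s_prev, g_prev, s_cur = s_cur, g_cur, s_next
                g_cur = slope(s_cur)
            return s_cur

        def nonlinear_cg(n=50, tol=1e-8, max_iter=500):
            """Polak-Ribiere nonlinear conjugate gradients for the quasi-static equilibrium."""
            f = [0.01] * n                     # hypothetical uniform nodal load
            u = [0.0] * n
            g = gradient(u, f)
            d = [-x for x in g]
            for it in range(1, max_iter + 1):
                s = line_search(u, d, f)
                u = [ui + s * di for ui, di in zip(u, d)]
                g_new = gradient(u, f)
                if dot(g_new, g_new) ** 0.5 < tol:
                    return u, it
                beta = max(0.0, dot(g_new, [a - b for a, b in zip(g_new, g)]) / dot(g, g))
                d = [-a + beta * b for a, b in zip(g_new, d)]
                g = g_new
            return u, max_iter

        if __name__ == "__main__":
            u, iters = nonlinear_cg()
            print(f"finished after {iters} iterations; midspan displacement = {u[len(u) // 2]:.6f}")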

    14. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 3

      SciTech Connect (OSTI)

      Wolery, T.J.

      1992-09-14

      EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
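
      Note: the saturation index and affinity expressions quoted above translate directly into code; the Python sketch below evaluates SI = log Q/K and A = -2.303 RT log Q/K for a single illustrative reaction. The activities, the calcite example, and the choice of kcal/mol units are assumptions, not EQ3NR database values.

        import math

        R_KCAL = 1.9872e-3   # gas constant, kcal/(mol K)

        def saturation_index(log_q, log_k):
            """SI = log10(Q/K): negative is undersaturated, zero is equilibrium, positive is supersaturated."""
            return log_q - log_k

        def thermodynamic_affinity(log_q, log_k, temperature_k=298.15):
            """A = -2.303 * R * T * log10(Q/K), here in kcal/mol."""
            return -2.303 * R_KCAL * temperature_k * (log_q - log_k)

        def log_q_calcite(activity_ca, activity_co3):
            """log10 of the ion activity product for CaCO3 = Ca++ + CO3--."""
            return math.log10(activity_ca) + math.log10(activity_co3)

        if __name__ == "__main__":
            # Hypothetical water: activities and log K are illustrative only.
            log_q = log_q_calcite(activity_ca=2.0e-3, activity_co3=8.0e-6)
            log_k = -8.48
            si = saturation_index(log_q, log_k)
            aff = thermodynamic_affinity(log_q, log_k)
            print(f"SI(calcite) = {si:+.2f}, affinity = {aff:+.2f} kcal/mol")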

    15. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Leadership Computing The Argonne Leadership Computing Facility (ALCF) was established at Argonne National Laboratory in 2004 as part of a U.S. Department of Energy (DOE) initiative dedicated to enabling leading-edge computational capabilities to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE Leadership

    16. An introduction to computer viruses

      SciTech Connect (OSTI)

      Brown, D.R.

      1992-03-01

      This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

    17. Probability of pipe fracture in the primary coolant loop of a PWR plant. Volume 9. PRAISE computer code user's manual. Load Combination Program Project I final report

      SciTech Connect (OSTI)

      Lim, E.Y.

      1981-06-01

      The PRAISE (Piping Reliability Analysis Including Seismic Events) computer code estimates the influence of earthquakes on the probability of failure at a weld joint in the primary coolant system of a pressurized water reactor. Failure, either a through-wall defect (leak) or a complete pipe severance (a large-LOCA), is assumed to be caused by fatigue crack growth of an as-fabricated interior surface circumferential defect. These defects are assumed to be two-dimensional and semi-elliptical in shape. The distribution of initial crack sizes is a function of crack depth and aspect ratio. PRAISE treats the inter-arrival times of operating transients either as a constant or exponentially distributed according to observed or postulated rates. Leak rate and leak detection models are also included. The criterion for complete pipe severance is exceedance of a net section critical stress. Earthquakes of various intensity and arbitrary occurrence times can be modeled. PRAISE presently assumes that exactly one initial defect exists in the weld and that the earthquake of interest is the first earthquake experienced at the reactor. PRAISE has a very modular structure and can be tailored to a variety of crack growth and piping reliability problems. Although PRAISE was developed on a CDC-7600 computer, it was, however, coded in standard FORTRAN IV and is readily transportable to other machines.
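
      Note: PRAISE's stratified sampling, crack-shape, and leak-rate models are not reproduced here. The Python sketch below is only a minimal Monte Carlo of the same general idea: sample an initial defect depth, grow it through exponentially spaced operating transients with a Paris-law increment, and count through-wall failures. All distributions and parameters are hypothetical.

        import math
        import random

        def paris_increment(a_m, delta_stress_mpa, c=1.0e-12, m=3.0, geometry=1.12):
            """Crack growth per stress cycle, da/dN = C * (dK)^m, with dK ~ geometry*dS*sqrt(pi*a)."""
            dk = geometry * delta_stress_mpa * math.sqrt(math.pi * a_m)
            return c * dk ** m

        def simulate_weld(rng, wall_m=0.025, years=40.0, transients_per_year=10.0,
                          cycles_per_transient=1000, delta_stress_mpa=150.0):
            """Grow one sampled surface defect through exponentially spaced transients; True means leak."""
            a = rng.lognormvariate(math.log(0.005), 0.7)   # initial crack depth (m), hypothetical
            t = rng.expovariate(transients_per_year)
            while t < years and a < wall_m:
                a += cycles_per_transient * paris_increment(a, delta_stress_mpa)
                t += rng.expovariate(transients_per_year)
            return a >= wall_m

        if __name__ == "__main__":
            rng = random.Random(7)
            trials = 20000
            leaks = sum(simulate_weld(rng) for _ in range(trials))
            print(f"estimated 40-year leak probability: {leaks / trials:.4f}")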

    18. ALGEBRA: a computer program that algebraically manipulates finite element output data. [In extended FORTRAN for CDC 7600 or CYBER 76 only

      SciTech Connect (OSTI)

      Richgels, M A; Biffle, J H

      1980-09-01

      ALGEBRA is a program that allows the user to process output data from finite-element analysis codes before they are sent to plotting routines. These data take the form of variable values (stress, strain, and velocity components, etc.) on a tape that is both the output tape from the analysis code and the input tape to ALGEBRA. The ALGEBRA code evaluates functions of these data and writes the function values on an output tape that can be used as input to plotting routines. Convenient input format and error detection capabilities aid the user in providing ALGEBRA with the functions to be evaluated. 1 figure.
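
      Note: ALGEBRA operates on tapes of finite element output; the Python sketch below only illustrates the kind of algebraic post-processing it performs, evaluating a derived quantity (here von Mises stress) from stored stress components for each element record. The record layout and values are invented.

        import math

        def von_mises(sx, sy, sz, txy, tyz, tzx):
            """Effective (von Mises) stress from the six stress components."""
            return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                             + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

        def evaluate(records, expression):
            """Apply a user-supplied function of the stored variables to every element record."""
            return [expression(r) for r in records]

        if __name__ == "__main__":
            # Hypothetical element output records (stress components in MPa).
            records = [
                {"SIGXX": 120.0, "SIGYY": 40.0, "SIGZZ": 0.0, "TAUXY": 25.0, "TAUYZ": 0.0, "TAUZX": 0.0},
                {"SIGXX": -60.0, "SIGYY": -60.0, "SIGZZ": -60.0, "TAUXY": 0.0, "TAUYZ": 0.0, "TAUZX": 0.0},
            ]
            mises = evaluate(records, lambda r: von_mises(r["SIGXX"], r["SIGYY"], r["SIGZZ"],
                                                          r["TAUXY"], r["TAUYZ"], r["TAUZX"]))
            for i, value in enumerate(mises, start=1):
                print(f"element {i}: von Mises stress = {value:.1f} MPa")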

    19. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    20. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    1. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    2. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect (OSTI)

      Osario, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21 C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes deliver sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20 C in 30 s and from 37 to 15 C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied for the design and development of cooling devices for seizure control.
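
      Note: the published model was built around the Pennes equation in ABAQUS; the Python sketch below is only a one-dimensional, explicit finite-difference illustration of the Pennes bioheat equation with a cooled boundary, using typical literature-range tissue properties as assumptions rather than the authors' parameters.

        def pennes_1d(probe_temp_c=0.0, body_temp_c=37.0, length_m=0.01, nx=101,
                      dt_s=0.01, total_time_s=30.0):
            """Explicit finite differences for rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m."""
            k = 0.5                                  # tissue conductivity, W/(m K)      (assumed)
            rho_c = 1050.0 * 3600.0                  # tissue density * specific heat    (assumed)
            perfusion = 0.008 * 1060.0 * 3860.0      # w_b * rho_b * c_b, W/(m^3 K)      (assumed)
            q_met = 10000.0                          # metabolic heat, W/m^3             (assumed)
            dx = length_m / (nx - 1)
            T = [body_temp_c] * nx
            T[0] = probe_temp_c                      # cooling probe surface held cold
            for _ in range(int(total_time_s / dt_s)):
                new = T[:]
                for i in range(1, nx - 1):
                    conduction = k * (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx ** 2
                    perf = perfusion * (body_temp_c - T[i])
                    new[i] = T[i] + dt_s * (conduction + perf + q_met) / rho_c
                T = new                              # both ends stay at fixed temperatures
            return T, dx

        if __name__ == "__main__":
            T, dx = pennes_1d()
            for depth_mm in (1.0, 2.0, 3.0):
                i = round(depth_mm * 1e-3 / dx)
                print(f"temperature at {depth_mm:.0f} mm depth after 30 s: {T[i]:.1f} C")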

    3. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oak Ridge Climate Change Science Institute Jim Hack Oak Ridge National Laboratory (ORNL) has formed the Oak Ridge Climate Change Science Institute (ORCCSI) that will develop and execute programs for the multi-agency, multi-disciplinary climate change research partnerships at ORNL. Led by Director Jim Hack and Deputy Director Dave Bader, the Institute will integrate scientific projects in modeling, observations, and experimentation with ORNL's powerful computational and informatics capabilities

    4. Computational Nuclear Structure | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Excellent scaling is achieved by the production Automatic Dynamic Load Balancing (ADLB) library on the BG/P. Computational Nuclear Structure PI Name: David Dean Hai Nam PI Email: namha@ornl.gov deandj@ornl.gov Institution: Oak Ridge National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 15 Million Year: 2010 Research Domain: Physics Researchers from Oak Ridge and Argonne national laboratories are using complementary techniques, including Green's Function Monte Carlo, the No

    5. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and ...

    6. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

    7. Unsolicited Projects in 2012: Research in Computer Architecture...

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    8. Computer System, Cluster and Networking Summer Institute (CSCNSI...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. ...

    9. Demystifying computer code for northern New Mexico students

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Laboratory employees recently helped elementary, middle, and high school students in northern New Mexico try their hands at computer programming during the Computer Science ...

    10. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer security Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    11. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes: Quad-Core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores. Compute Node Software: By default the compute nodes run a restricted low-overhead
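      The quoted machine totals follow directly from the per-node figures; a short arithmetic check:

      ```python
      # Reproduce the listed totals from the per-node figures.
      nodes, cores_per_node = 9572, 4
      gflops_per_core = 9.2

      total_cores = nodes * cores_per_node                        # 38,288 cores
      gflops_per_node = round(gflops_per_core * cores_per_node, 1)  # 36.8 Gflops/node
      machine_tflops = gflops_per_node * nodes / 1000.0           # ~352 Tflops

      print(total_cores, gflops_per_node, round(machine_tflops))
      ```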

    12. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science and Information Technology Student Pipeline Program Description Los Alamos National Laboratory's High Performance Computing and Information Technology Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer Security, Software Engineering, Computer Engineering, and Electrical Engineering. Students are provided a mentor and challenging projects to demonstrate their

    13. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect (OSTI)

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.
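      For context, the material Courant condition that the SETS scheme is allowed to violate limits an explicit step to roughly the time a fluid parcel needs to cross one cell, dt <= dx/|u|. A minimal illustration of that limit, with arbitrary values rather than TRAC data:

      ```python
      # Illustrative material Courant limit for an explicit hydrodynamics step:
      # dt <= dx / |u|.  Values are hypothetical and only show the scale of the
      # constraint that the SETS algorithm in TRAC is designed to exceed.
      dx = 0.25        # cell length (m), hypothetical
      u = 5.0          # coolant velocity (m/s), hypothetical
      dt_courant = dx / abs(u)
      print("explicit material-Courant time-step limit:", dt_courant, "s")
      ```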

    14. Computational Research and Theory (CRT) Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Project Description: Wang Hall, previously the Computational Research and Theory Facility, is the new home for high performance computing at LBNL and houses the National Energy Research Scientific Computing Center (NERSC). NERSC supports DOE's mission to discover,

    15. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      and Vehicle Technologies Program Annual Merit Review and Peer Evaluation PDF icon lm012li2011o.pdf More Documents & Publications Integrated Computational Materials Engineering ...

    16. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      Computer Science ACM Letters on Programming Languages and Systems (LOPLAS) ACM Transactions on Applied Perception (TAP) ACM Transactions on Architecture and Code Optimization ...

    17. Computational Scientist | Princeton Plasma Physics Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Department, with interest in leadership class computing of gyrokinetic fusion edge plasma research. A candidate who has knowledge in hybrid parallel programming with MPI, OpenMP,...

    18. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Project (Part 1) Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project (Part 1) 2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit...

    19. Computational Design of Interfaces for Photovoltaics | Argonne...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Design of Interfaces for Photovoltaics PI Name: Noa Marom PI Email: nmarom@tulane.edu Institution: Tulane University Allocation Program: ALCC Allocation Hours at...

    20. Discretionary Allocation Request | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Fusion Energy, Magnetic Fusion Materials Science, Condensed Matter and Materials Physics ... This may include information such as: - computational methods - programming model - ...

    1. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

    3. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    4. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    5. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    6. 7th DOE workshop on computer-aided engineering

      SciTech Connect (OSTI)

      Not Available

      1991-01-01

      This report contains the abstracts and the program for the 7th DOE workshop on Computer-Aided Engineering. (LSP)

    7. Wind Energy Program: Top 10 Program Accomplishments | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Wind Energy Program: Top 10 Program Accomplishments Wind Energy Program: Top 10 Program Accomplishments Brochure on the top accomplishments of the Wind Energy Program, including the development of large wind machines, small machines for the residential market, wind tunnel testing, computer codes for modeling wind systems, high definition wind maps, and successful collaborations. top_10_wind_accomplishments (1.84 MB) More Documents & Publications Wind Program Accomplishments DOE Wind Energy

    8. Radiological Worker Computer Based Training

      Energy Science and Technology Software Center (OSTI)

      2003-02-06

      Argonne National Laboratory has developed an interactive computer based training (CBT) version of the standardized DOE Radiological Worker training program. This CD-ROM based program utilizes graphics, animation, photographs, sound and video to train users in ten topical areas: radiological fundamentals, biological effects, dose limits, ALARA, personnel monitoring, controls and postings, emergency response, contamination controls, high radiation areas, and lessons learned.

    9. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation to this introduction to computer course.

    10. Integrating Program Component Executables

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Integrating Program Component Executables on Distributed Memory Architectures via MPH Chris Ding and Yun He Computational Research Division, Lawrence Berkeley National Laboratory University of California, Berkeley, CA 94720, USA chqding@lbl.gov, yhe@lbl.gov Abstract A growing trend in developing large and complex applications on today's Teraflop computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the climate

    11. Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A. A.

      2011-05-01

      This presentation describes NREL's computer aided engineering program for electric drive vehicle batteries.

    12. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mentors Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Mentors Bob Robey Bob Robey XCP-2: EULERIAN CODES Bob Robey is a Research Scientist in the Eulerian Applications group at Los Alamos National Laboratory. He is the

    13. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Guide to Los Alamos Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Guide to Los Alamos During your 10-week internship, we hope you have the opportunity to explore and enjoy Los Alamos and the surrounding area. Here are some

    14. Computer-Aided Design of Materials for use under High Temperature Operating Condition

      SciTech Connect (OSTI)

      Rajagopal, K. R.; Rao, I. J.

      2010-01-31

      The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures so that materials can be efficiently designed and engineered for the application in question, leading to enhanced performance of the material, a significant decrease in costs, and a reduction in the time necessary to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high operating temperatures; the development of armor and missile heads; corrosion-resistant tanks and containers; better conductors of electricity; and the numerous other applications that are envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is developed. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single-crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.
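      Creep behavior of the kind mentioned above is often represented, in its simplest continuum form, by a Norton power-law rate equation. The sketch below illustrates such a law with placeholder constants; it is not the authors' single-crystal superalloy model or their ABAQUS implementation.

      ```python
      import math

      # Generic Norton power-law creep: strain_rate = A * sigma**n * exp(-Q / (R*T)).
      # All constants are placeholders for illustration, not fitted material data.
      A, n, Q, R = 1.0e-10, 5.0, 3.0e5, 8.314   # prefactor, stress exponent, activation energy (J/mol), gas constant

      def creep_rate(sigma_mpa, temperature_k):
          """Steady-state creep strain rate (1/s) for a stress given in MPa."""
          return A * sigma_mpa**n * math.exp(-Q / (R * temperature_k))

      print(creep_rate(200.0, 1200.0))
      ```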

    15. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
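      The operations listed (differentiation, integration, limits, series expansion, equation solving) are the standard repertoire of a computer algebra system. As a rough modern analogue, shown here with the open-source SymPy library rather than MACSYMA itself, the same kinds of manipulations look like this:

      ```python
      import sympy as sp

      x = sp.symbols('x')

      print(sp.diff(sp.sin(x) * sp.exp(x), x))      # differentiate
      print(sp.integrate(1 / (1 + x**2), x))        # integrate -> atan(x)
      print(sp.limit(sp.sin(x) / x, x, 0))          # take a limit -> 1
      print(sp.series(sp.exp(x), x, 0, 4))          # Taylor series to order 4
      print(sp.solve(x**2 - 2, x))                  # solve a polynomial equation
      ```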

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    17. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    18. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    19. Multiprocessor programming environment

      SciTech Connect (OSTI)

      Smith, M.B.; Fornaro, R.

      1988-12-01

      Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.

    20. Parallel Programming with MPI | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Speaker(s): Balaji, Rajeev Thakur, Ken Raffenetti, Halim Amer (Argonne National Laboratory, MCS). Event Website: https://www.mcs.anl.gov/~raffenet/permalinks/argonne16mpi.php ...

    1. Programs & User Facilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Programs » Office of Science » Programs & User Facilities Programs & User Facilities Enabling remarkable discoveries, tools that transform our understanding of energy and matter and advance national, economic, and energy security Advanced Scientific Computing Research Applied Mathematics Co-Design Centers Exascale Co-design Center for Materials in Extreme Environments (ExMatEx) Center for Exascale Simulation of Advanced Reactors (CESAR) Center for Exascale Simulation of

    2. Parallel programming with PCN

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

    3. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    4. Requirements for supercomputing in energy research: The transition to massively parallel computing

      SciTech Connect (OSTI)

      Not Available

      1993-02-01

      This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

    5. About the Advanced Computing Tech Team | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      About the Advanced Computing Tech Team: The Advanced Computing Tech Team is made up of representatives from DOE and its national laboratories who are involved with developing and using advanced computing tools. The following is a list of some of those programs and how they are currently using advanced computing in pursuit of their respective missions. Advanced Scientific Computing Research (ASCR): The mission of the Advanced Scientific Computing Research

    6. Computational trigonometry

      SciTech Connect (OSTI)

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
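      In this theory the first antieigenvalue of a symmetric positive definite matrix A is cos phi(A) = 2*sqrt(lambda_min*lambda_max)/(lambda_min + lambda_max), and sin phi(A) = (lambda_max - lambda_min)/(lambda_max + lambda_min) governs the Kantorovich-type bound on steepest descent. The sketch below, an illustration of these formulas rather than the author's code, evaluates both for a small test matrix.

      ```python
      import numpy as np

      # Illustrative computation of Gustafson's antieigenvalue quantities for a
      # small symmetric positive definite matrix (not the author's code).
      A = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 0.5],
                    [0.0, 0.5, 1.0]])

      lam = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
      lam_min, lam_max = lam[0], lam[-1]

      cos_phi = 2.0 * np.sqrt(lam_min * lam_max) / (lam_min + lam_max)  # first antieigenvalue
      sin_phi = (lam_max - lam_min) / (lam_max + lam_min)               # convergence factor

      print("cos phi(A):", cos_phi)
      print("per-step steepest-descent error bound sin^2 phi(A):", sin_phi**2)
      ```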

    7. CNL Programming Considerations on Franklin

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CNL Programming Considerations on Franklin. Shared Libraries (not supported): The Cray XT series currently does not support dynamic loading of executable code or shared libraries. Also, the related LD_PRELOAD environment variable is not supported. It is recommended to run shared-library applications on Hopper. GNU C Runtime Library glibc Functions: The lightweight OS on the compute nodes, Compute Node Linux (CNL), is designed to optimize

    8. Hour of Code sparks interest in computer science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hour of Code sparks interest in computer science: Taking the mystery out of programming. February 1, 2016. Hour of Code participants work their way through fun computer programming tutorials. Contact: Community Programs Director Kathy Keith

    9. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS) Division is an international ... and statistics The deployment and integration of computational technology, ...

    10. Programming Challenges Presentations | U.S. DOE Office of Science...

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    11. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    12. Computational Combustion

      SciTech Connect (OSTI)

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark-ignition, diesel, and homogeneous-charge compression-ignition engines; surface and catalytic combustion; pulse combustion; and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.
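      Chemical kinetics submodels of the kind surveyed reduce, in their simplest form, to Arrhenius rate laws integrated in time. The toy sketch below integrates a single-step global reaction at fixed temperature; the constants are arbitrary placeholders, not a real fuel mechanism.

      ```python
      import math

      # Toy single-step global kinetics: d[F]/dt = -A * [F] * exp(-Ea / (R*T)).
      # Constants are arbitrary placeholders, not a real fuel mechanism.
      A, Ea, R = 1.0e9, 1.5e5, 8.314
      T = 1500.0                # fixed temperature (K) for this illustration
      conc, dt = 1.0, 1.0e-6    # normalized fuel concentration, time step (s)

      t = 0.0
      while conc > 0.5:         # integrate until half the fuel is consumed
          rate = A * conc * math.exp(-Ea / (R * T))
          conc -= dt * rate
          t += dt

      print("time to consume half the fuel:", t, "s")
      ```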

    13. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.

    14. Stockpile Stewardship Program Quarterly Experiments | National Nuclear

      National Nuclear Security Administration (NNSA)

      Security Administration | (NNSA) Stockpile Stewardship Program Quarterly Experiments The U.S. Stockpile Stewardship Program is a robust program of scientific inquiry used to sustain and assess the nuclear weapons stockpile without the use of underground nuclear tests. The experiments carried out within the program are used in combination with complex computational models and NNSA's Advanced Simulation and Computing (ASC) Program to assess the safety, security and effectiveness of the

    15. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    16. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    17. GPU Computing - Dirac.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computing with Dirac, Hemant Shukla. Architectural differences (CPU vs. GPU): ALU, cache, DRAM, control logic. GPU: 512 cores, 10s to 100s of threads per core; latency is hidden by fast context switching. CPU: less than 20 cores, 1-2 threads per core; latency is hidden by a large cache. Programming models: CUDA (Compute Unified Device Architecture), OpenCL, Microsoft's DirectCompute. Third party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB, IDL, and

    18. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    19. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    20. Stewardship Science Graduate Fellowship Programs | National Nuclear

      National Nuclear Security Administration (NNSA)

      Security Administration | (NNSA) Home / content Stewardship Science Graduate Fellowship Programs The Computational Science Graduate Fellowship (CSGF) The Department of Energy Computational Science Graduate Fellowship program provides outstanding benefits and opportunities to students pursuing doctoral degrees in fields of study that use high performance computing to solve complex science and engineering problems. The program fosters a community of bright, energetic and committed Ph.D.

    1. University Program in Advanced Technology | National Nuclear...

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    2. Program Structure | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    3. SEP Program Planning Template ("Program Planning Template") ...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SEP Program Planning Template ("Program Planning Template") SEP Program Planning Template ("Program Planning Template") Program Planning Template More Documents & Publications...

    4. Center for Computing Research Summer Research Proceedings 2015.

      SciTech Connect (OSTI)

      Bradley, Andrew Michael; Parks, Michael L.

      2015-12-18

      The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

    5. Program Evaluation: Program Life Cycle

      Broader source: Energy.gov [DOE]

      In general, different types of evaluation are carried out over different parts of a program's life cycle (e.g., Creating a program, Program is underway, or Closing out or end of program)....

    6. Bringing Advanced Computational Techniques to Energy Research

      SciTech Connect (OSTI)

      Mitchell, Julie C

      2012-11-17

      Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

    7. Program Managers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Mathematics: Pieter Swart, T-5 Computer Science: Pat McCormick, CCS-1 Computational Partnerships: Galen Shipman, CCS-7 Basic Energy Sciences Materials Sciences & ...

    8. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    9. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.
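      As a rough modern stand-in for the concentration contour graphs described (using matplotlib rather than NCSA Image, and a synthetic Gaussian plume rather than the transport-code output), such a display can be produced like this:

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      # Synthetic Gaussian "plume" used only to illustrate a concentration
      # contour plot; it is not the air-pollution data described in the report.
      x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(-3, 3, 200))
      conc = np.exp(-((y / (0.3 + 0.2 * x)) ** 2)) / (0.3 + 0.2 * x)

      plt.contourf(x, y, conc, levels=15)
      plt.colorbar(label="relative concentration")
      plt.xlabel("downwind distance")
      plt.ylabel("crosswind distance")
      plt.title("Illustrative concentration contour graph")
      plt.show()
      ```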

    10. Sandia National Laboratories: Advanced Simulation and Computing: Facilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Operation & User Support Facilities Operation & User Support APPRO The Facilities, Operations and User Support (FOUS) program is responsible for operating and maintaining the computing systems procured by the Advanced Simulation and Computing (ASC) program, and for delivering additional computing related services to Defense Program customers located across the Nuclear Weapons Complex. Sandia has developed a robust User Support capability which provides various services to analysts,

    11. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Featured Science: Simulation of cosmic reionization. Cosmic Reionization On Computers, Nickolay Gnedin. Allocation Program: INCITE. Allocation Hours: 65 Million.

    12. Avanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center (OSTI)

      1998-10-27

      The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
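      The actual AMS API is not documented in this record, so the sketch below only illustrates the underlying idea: exposing named variables of a running computation over a socket so a remote client can inspect them. The class and method names are invented for this illustration and are not the real library interface.

      ```python
      import json
      import socket
      import threading

      class ToyVariablePublisher:
          """Illustrative stand-in for a steering API: publishes named variables
          of a running computation over a TCP socket (not the real AMS API)."""

          def __init__(self, host="127.0.0.1", port=9099):
              self.variables = {}
              self.server = socket.create_server((host, port))
              threading.Thread(target=self._serve, daemon=True).start()

          def publish(self, name, value):
              self.variables[name] = value

          def _serve(self):
              while True:
                  conn, _ = self.server.accept()
                  with conn:
                      conn.sendall(json.dumps(self.variables).encode())

      # Hypothetical usage inside a computation loop:
      #   pub = ToyVariablePublisher()
      #   for step in range(n_steps):
      #       ...advance the simulation...
      #       pub.publish("residual", residual)
      ```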

    13. Computational Studies of Nucleosome Stability | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility nucleosome 1KX5 Image of the nucleosome 1KX5 from the Protein Data Bank (from X. Zhu, TACC). This DNA/protein complex will serve as the primary target of simulation studies to be performed by the Schatz group as part of the INCITE program. Computational Studies of Nucleosome Stability PI Name: George Schatz PI Email: schatz@chem.northwestern.edu Institution: Northwestern University Allocation Program: INCITE Allocation Hours at ALCF: 20 Million Year: 2013 Research Domain:

    14. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High-Performance Computing: INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    15. Computational Systems & Software Environment | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      Administration | (NNSA) Computational Systems & Software Environment The mission of this national sub-program is to build integrated, balanced, and scalable computational capabilities to meet the predictive simulation requirements of NNSA. This sub-program strives to provide users of ASC computing resources a stable and seamless computing environment for all ASC-deployed platforms. Along with these powerful systems that ASC will maintain and field the supporting software infrastructure

    16. Computational and Experimental Screening of Mixed-Metal Perovskite

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    17. Intro - High Performance Computing for 2015 HPC Annual Report

      SciTech Connect (OSTI)

      Klitsner, Tom

      2015-10-01

      The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia, the NNSA ASC program and Sandia's Institutional HPC Program, are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

    18. ASCR Leadership Computing Challenge Requests for Time Due February...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing...

    19. INCITE grants awarded to 56 computational research projects ...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      "The INCITE program drives some of the world's most ambitious and groundbreaking computational research in science and engineering," said James Hack, director of the National ...

    20. Program predicts waterflooding performance

      SciTech Connect (OSTI)

      Fassihi, M.R.; O'Brien, W.J.

      1987-04-01

      Water is a handheld calculator program for estimating waterflooding performance in a multilayered oil reservoir for patterns such as five-spot, direct line drive and staggered line drive. Topics considered in this paper include oil wells, sweep efficiency, well stimulation, computer calculations, stratification, enhanced recovery, calculators, reservoir rock, and reservoir engineering.

    1. Center for Computational Excellence | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Center for Computational Excellence (CCE) provides the connections, resources, and expertise that facilitate a more common HEP computing environment and, when possible, a move away from experiment-specific software. This means helping members of the community connect to one another to avoid reinventing the wheel by finding existing solutions or engineering experiment-independent solutions. HEP-CCE activity will take place under three types of programs. The first

    2. Vehicle Technologies Office Merit Review 2014: Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

      Broader source: Energy.gov [DOE]

      Presentation given by NREL at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about significant enhancement of computational...

    3. Parallel programming with PCN

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1993-01-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

    4. Program evaluation: Weatherization Residential Assistance Partnership (WRAP) Program

      SciTech Connect (OSTI)

      Jacobson, Bonnie B.; Lundien, Barbara; Kaufman, Jeffrey; Kreczko, Adam; Ferrey, Steven; Morgan, Stephen

      1991-12-01

      The "Weatherization Residential Assistance Partnership," or WRAP program, is a fuel-blind conservation program designed to assist Northeast Utilities' low-income customers to use energy safely and efficiently. Innovative with respect to its collaborative approach and its focus on utilizing and strengthening the existing low-income weatherization service delivery network, the WRAP program offers an interesting model to other utilities which traditionally have relied on for-profit energy service contractors and highly centralized program implementation structures. This report presents appendices with surveys, a participant list, and computer programs to examine and predict potential energy savings.

    5. Program Administration

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-08-21

      This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.

    6. Weatherization Program

      Broader source: Energy.gov [DOE]

      Residences participating in the Home Energy Rebate or New Home Rebate Program may not also participate in the Weatherization Program

    7. Reactor Safety Research Programs

      SciTech Connect (OSTI)

      Edler, S. K.

      1981-07-01

      This document summarizes the work performed by Pacific Northwest Laboratory (PNL) from January 1 through March 31, 1981, for the Division of Reactor Safety Research within the U.S. Nuclear Regulatory Commission (NRC). Evaluations of nondestructive examination (NDE) techniques and instrumentation are reported; areas of investigation include demonstrating the feasibility of determining the strength of structural graphite, evaluating the feasibility of detecting and analyzing flaw growth in reactor pressure boundary systems, examining NDE reliability and probabilistic fracture mechanics, and assessing the integrity of pressurized water reactor (PWR) steam generator tubes where service-induced degradation has been indicated. Experimental data and analytical models are being provided to aid in decision-making regarding pipe-to-pipe impacts following postulated breaks in high-energy fluid system piping. Core thermal models are being developed to provide better digital codes to compute the behavior of full-scale reactor systems under postulated accident conditions. Fuel assemblies and analytical support are being provided for experimental programs at other facilities. These programs include loss-of-coolant accident (LOCA) simulation tests at the NRU reactor, Chalk River, Canada; fuel rod deformation, severe fuel damage, and postaccident coolability tests for the ESSOR reactor Super Sara Test Program, Ispra, Italy; the instrumented fuel assembly irradiation program at Halden, Norway; and experimental programs at the Power Burst Facility, Idaho National Engineering Laboratory (INEL). These programs will provide data for computer modeling of reactor system and fuel performance during various abnormal operating conditions.

    8. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    9. Cosmic Reionization On Computers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Numerical model of cosmic reionization Numerical model of cosmic reionization. Brown non-transparent fog renders neutral gas, glowing blue is dense ionized gas (which becomes completely transparent when it is not dense); yellow dots are galaxies. Credit: Nick Gnedin, Fermilab Cosmic Reionization On Computers PI Name: Nickolay Gnedin PI Email: gnedin@fnal.gov Institution: Fermilab Allocation Program: INCITE Allocation Hours at ALCF: 74 Million Year: 2015 Research Domain: Physics Cosmic

    10. Visiting Faculty Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Visiting Faculty Program Program Description The Visiting Faculty Program seeks to increase the research competitiveness of faculty members and their students at institutions historically underrepresented in the research community in order to expand the workforce vital to Department of Energy mission areas. As part of the program, selected university/college faculty members collaborate with DOE laboratory research staff on a research project of mutual interest. Program Objective The program is

    11. Substation grounding programs

      SciTech Connect (OSTI)

      Meliopoulos, A.P.S. (Electric Power Lab.)

      1992-05-01

      This document is a user's manual and applications guide for the software package SGA. This package comprises four computer programs, namely SOMIP, SMECC, SGSYS, and TGRND. The first three programs are analysis models which are to be used in the design process of substation grounding systems. The fourth program, TGRND, is an analysis program for determining the transient response of a grounding system. This report, Volume 5, is an applications guide for the three computer programs SOMIP, SMECC, and SGSYS, for the purpose of designing a safe substation grounding system. The applications guide utilizes four example substation grounding systems for the purpose of illustrating the application of the programs SOMIP, SMECC, and SGSYS. The examples are based on data provided by four contributing utilities, namely, Houston Lighting and Power Company, Southern Company Services, Puget Sound Power and Light Company, and Arizona Public Service Company. For the purpose of illustrating specific capabilities of the computer programs, the data have been modified. As a result, the final designs of the four systems do not necessarily represent actual grounding system designs by these utilities. The example system 1 is a 138 kV/35 kV distribution substation. The example system 2 is a medium size 230 kV/115 kV transmission substation. The third example system is a generation substation while the last is a large 525 kV/345 kV/230 kV transmission substation. The four examples cover most of the practical problems that a user may encounter in the design of substation grounding systems.

    12. Visiting Faculty Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      covers stipend and travel reimbursement for the 10-week program. Teacher/faculty participants: 1. Program Coordinator: Scott Robbins. Email: srobbins@lanl.gov. Phone number: 663-5621...

    13. Ten Projects Awarded NERSC Allocations under DOE's ALCC Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Ten Projects Awarded NERSC Allocations under DOE's ALCC Program. June 24, 2014. NERSC Computer Room. Photo by Roy Kaltschmidt, LBNL. Under the Department of Energy's (DOE) ASCR Leadership Computing Challenge (ALCC) program, 10 research teams at national laboratories and universities have been awarded 382.5 million hours of computing time at the National Energy Research Scientific Computing Center (NERSC). The

    14. A Survey of Techniques for Approximate Computing

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Mittal, Sparsh

      2016-03-18

      Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
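      One family of techniques commonly discussed in the approximate-computing literature is loop perforation: skipping a fraction of loop iterations and accepting the resulting quality loss. A minimal sketch of the idea (not drawn from the survey itself):

      ```python
      def mean_exact(values):
          return sum(values) / len(values)

      def mean_perforated(values, skip=2):
          """Loop perforation: visit only every `skip`-th element, trading
          accuracy for roughly a `skip`-fold reduction in work."""
          sampled = values[::skip]
          return sum(sampled) / len(sampled)

      data = [float(i % 97) for i in range(100_000)]
      print("exact:", mean_exact(data), " approximate:", mean_perforated(data, skip=4))
      ```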

    15. (Sparsity in large scale scientific computation)

      SciTech Connect (OSTI)

      Ng, E.G.

      1990-08-20

      The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was on sparsity in large scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues having direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.

    16. Computational Electronics and Electromagnetics

      SciTech Connect (OSTI)

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.
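      The finite-difference time-domain method mentioned above (the basis of the TSAR code) advances interleaved electric and magnetic fields with a leapfrog update. The sketch below is a textbook one-dimensional version in normalized units, not an excerpt from TSAR or DSI3D.

      ```python
      import numpy as np

      # Minimal 1D FDTD (Yee) leapfrog update in normalized units; a textbook
      # illustration of the method, not code from TSAR or DSI3D.
      nz, nsteps = 200, 300
      ez = np.zeros(nz)   # electric field samples
      hy = np.zeros(nz)   # magnetic field samples (staggered half a cell)

      for n in range(nsteps):
          hy[:-1] += ez[1:] - ez[:-1]                     # update H from the curl of E
          ez[1:] += hy[1:] - hy[:-1]                      # update E from the curl of H
          ez[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

      print("peak |Ez| after", nsteps, "steps:", np.max(np.abs(ez)))
      ```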

    17. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    18. Back to the ASCR Program Documents Page | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Program Documents » ASCR Program Documents Archive Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources Featured Content ASCR Discovery ASCR Program Documents ASCR Program Documents Archive HPC Workshop Series ASCR Workshops and Conferences Contact Information Advanced Scientific Computing Research U.S. Department of Energy

    19. Community Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Community Programs Community Environmental Documents Tours Community Programs Friends of Berkeley Lab ⇒ Navigate Section Community Environmental Documents Tours Community Programs Friends of Berkeley Lab Community Education Programs Workforce Development & Education As part of the Lab's education mission to inspire and prepare the next generation of scientists and engineers, the Workforce Development & Education runs numerous education programs for all ages of students-from elementary

    20. CAP Program Guidance | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      CAP Program Guidance CAP Program Guidance In 2002, the Department of Energy signed an interagency agreement with the Department of Defense's Computer/Electronic Accommodations Program (CAP) to provide assistive/adaptive technology free of charge to DOE employees with disabilities. The following information regarding CAP is being provided to assist federal employees, managers and on-site disability coordinators with the CAP application process. CAP Program Guidance (40.26 KB) Responsible

    1. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPU are also discussed. General Purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU because it is embarrassingly parallel, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
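
      The thread/block/grid decomposition described above can be illustrated with a small host-side sketch. This is not the TracyGPU code; it only emulates, in plain Python, how a global thread index would be derived from block and thread coordinates so that each logical thread tracks one particle. The block size and the tracking step are placeholders.

        # Host-side emulation of the CUDA-style index arithmetic described above.
        # This is NOT TracyGPU; the block size and tracking step are placeholders.

        THREADS_PER_BLOCK = 32   # assumed block size, for illustration only

        def track_particle(particle):
            """Placeholder for the per-particle dynamic-aperture tracking kernel."""
            x, px = particle
            return (x + 0.01 * px, px)

        def launch(particles):
            """Emulate a 1-D grid: each logical thread handles one particle."""
            n = len(particles)
            num_blocks = (n + THREADS_PER_BLOCK - 1) // THREADS_PER_BLOCK
            results = [None] * n
            for block_idx in range(num_blocks):               # grid of blocks
                for thread_idx in range(THREADS_PER_BLOCK):   # threads in a block
                    gid = block_idx * THREADS_PER_BLOCK + thread_idx  # global index
                    if gid < n:                               # guard against overrun
                        results[gid] = track_particle(particles[gid])
            return results

        print(launch([(1.0e-3 * i, 0.0) for i in range(100)])[:3])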

    2. ESnet Program Plan 1994

      SciTech Connect (OSTI)

      Merola, S.

      1994-11-01

      This Program Plan characterizes ESnet with respect to the current and future needs of Energy Research programs for network infrastructure, services, and development. In doing so, this document articulates the vision and recommendations of the ESnet Steering Committee regarding ESnet's development and its support of computer networking facilities and associated user services. To afford the reader a perspective from which to evaluate the ever-increasing utility of networking to the Energy Research community, we have also provided a historical overview of Energy Research networking. Networking has become an integral part of the work of DOE principal investigators, and this document is intended to assist the Office of Scientific Computing in ESnet program planning and management, including prioritization and funding. In particular, we identify the new directions that ESnet's development and implementation will take over the course of the next several years. Our basic goal is to ensure that the networking requirements of the respective scientific programs within Energy Research are addressed fairly. The proliferation of regional networks and additional network-related initiatives by other Federal agencies is changing the process by which we plan our own efforts to serve the DOE community. ESnet provides the Energy Research community with access to many other peer-level networks and to a multitude of other interconnected network facilities. ESnet's connectivity and relationship to these other networks and facilities are also described in this document. Major Office of Energy Research programs are managed and coordinated by the Office of Basic Energy Sciences, the Office of High Energy and Nuclear Physics, the Office of Magnetic Fusion Energy, the Office of Scientific Computing, and the Office of Health and Environmental Research. Summaries of these programs are presented, along with their functional and technical requirements for wide-area networking.

    3. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

    4. Light Water Reactor Sustainability Program - Integrated Program...

      Office of Environmental Management (EM)

      Program - Integrated Program Plan Light Water Reactor Sustainability Program - Integrated Program Plan The Light Water Reactor Sustainability (LWRS) Program is a research and ...

    5. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory, Modeling and Computation Theory, Modeling and Computation The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme computing. CONTACT Jack Shlachter (505) 665-1888 Email Extreme Computing to Power Accurate Atomistic Simulations Advances in high-performance computing and theory allow longer and larger atomistic simulations than currently possible.

    6. ASCR Leadership Computing Challenge Requests for Time Due February 14

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requests for Time Due February 14 ASCR Leadership Computing Challenge Requests for Time Due February 14 November 17, 2011 by Francesca Verdier The ASCR Leadership Computing Challenge (ALCC) program is open to scientists from the research community in national laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing the clean energy agenda. Understanding the environmental impacts of

    7. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy ...

    8. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
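
      As a rough illustration of the idea in this abstract (not the patented implementation), the sketch below keeps two independent link tables and falls back to the second network when a link on the first is marked defective. The class, its names, and the routing logic are assumptions.

        # Illustrative sketch of dual-network fault fallback; not the patented method.

        class DualNetworkRouter:
            def __init__(self, links_a, links_b):
                # Each table holds (src, dst) pairs for which a direct link exists.
                self.links = {"A": set(links_a), "B": set(links_b)}
                self.defective = set()  # defective (network, src, dst) links

            def mark_defective(self, network, src, dst):
                """Record a hardware fault identified on one network's link."""
                self.defective.add((network, src, dst))

            def route(self, src, dst):
                """Prefer network A; route around a defective link via network B."""
                for network in ("A", "B"):
                    if (src, dst) in self.links[network] and \
                       (network, src, dst) not in self.defective:
                        return network
                raise RuntimeError("no healthy link between nodes")

        router = DualNetworkRouter(links_a={(0, 1)}, links_b={(0, 1)})
        router.mark_defective("A", 0, 1)
        print(router.route(0, 1))  # falls back to "B"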

    9. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      & Computational Math - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us ... Twitter Google + Vimeo GovDelivery SlideShare Applied & Computational Math HomeEnergy ...

    10. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    11. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    12. Aurora ESP Call for Proposals | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Early Science Program Aurora ESP Call for Proposals Aurora ESP Proposal Instructions INCITE Program ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2015 Projects ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Aurora ESP Call for Proposals Aurora ESP In late 2018, the Argonne Leadership Computing Facility (ALCF) will deploy Aurora, a new Intel-Cray system based on the third-generation Intel® Xeon

    13. Previous Computer Science Award Announcements | U.S. DOE Office of Science

      Office of Science (SC) Website

      (SC) Previous Computer Science Award Announcements Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing

    14. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

    15. Retiree Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Library Services » Retiree Program Retiree Program The Research Library offers a 1 year library card to retired LANL employees that allows usage of Library materials. This service is only available to retired LANL employees. Who is eligible? Any Laboratory retiree, not participating in any other program (ie, Guest Scientist, Affiliate). Upon completion of your application, you will be notified of your acceptance into the program. This does not include past students. What is the term of the

    16. HVAC Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Commercial Program Development Commercial Current Promotions Industrial Federal Agriculture Heating Ventilation and Air Conditioning Energy efficient Heating Ventilation and...

    17. SC11 Education Program Applications due July 31

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SC11 Education Program Applications due July 31 SC11 Education Program Applications due July 31 June 9, 2011 by Francesca Verdier Applications for the Education Program are now being accepted. Submission website: https://submissions.supercomputing.org Applications deadline: Sunday, July 31, 2011 Acceptance Notifications: Monday, August 22, 2011 The Education Program is hosting a four-day intensive program that will immerse participants in High Performance Computing (HPC) and Computational and

    18. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    19. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    20. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    1. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

      SciTech Connect (OSTI)

      Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

      2010-12-02

      For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
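
      In the spirit of the kernel-plus-verification pairing described above (a sketch only; TORCH's actual reference implementations are in C and MATLAB, and the kernel choice and tolerance here are assumptions), a dense matrix-vector product kernel might be paired with an algorithmic check like this:

        # Sketch of a reference kernel paired with an algorithmic verification test,
        # in the spirit of TORCH; the kernel choice and tolerance are assumptions.
        import random

        def matvec(a, x):
            """Reference kernel: dense matrix-vector product y = A x."""
            return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in a]

        def verify_matvec(candidate, n=64, tol=1e-10, seed=0):
            """Verification test: compare a candidate implementation against the
            reference kernel on a random instance, independent of how the candidate
            is implemented (language, algorithm, or hardware)."""
            rng = random.Random(seed)
            a = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
            x = [rng.uniform(-1, 1) for _ in range(n)]
            ref, got = matvec(a, x), candidate(a, x)
            err = max(abs(r - g) for r, g in zip(ref, got))
            return err <= tol

        print(verify_matvec(matvec))  # the reference trivially verifies itself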

    2. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    3. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    4. High performance computing and communications: FY 1997 implementation plan

      SciTech Connect (OSTI)

      1996-12-01

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

    5. Aurora Early Science Program Proposal Instructions | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility Early Science Program Aurora ESP Call for Proposals Aurora ESP Proposal Instructions INCITE Program ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2015 Projects ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Aurora Early Science Program Proposal Instructions Aurora ESP General Information and Submission Instructions Our intent is for Aurora Early Science Program (ESP) proposals

    6. Timothy Williams | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Timothy Williams Deputy Director of Science Timothy Williams Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 2129 Argonne, IL 60439 630-252-1154 tjwilliams@anl.gov http://alcf.anl.gov/~zippy Tim Williams is a computational scientist at the Argonne Leadership Computing Facility (ALCF), where he serves as Deputy Director of Science. He is manager of the Early Science Program, which prepares scientific applications for early use of the facility's next-generation

    7. Lattice QCD | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1 Research Domain: Physics We propose to use the Argonne Leadership Class Computing Facility's BlueGene/P and the Oak Ridge Leadership Class Computing Facility's Cray XT4/XT5 to dramatically advance our research in lattice quantum chromodynamics and other strongly coupled field theories of importance to the study of high energy and nuclear physics. This research addresses fundamental questions in high energy and nuclear physics, and is directly related to major experimental programs in these

    8. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    9. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

      Office of Science (SC) Website

      and Evolving MPI for Exascale | U.S. DOE Office of Science (SC) 2: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities

    10. Large Scale Production Computing and Storage Requirements for Fusion Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

    11. Large Scale Production Computing and Storage Requirements for High Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 HEPlogo.jpg The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

    12. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    13. Exploring HPCS Languages in Scientific Computing

      SciTech Connect (OSTI)

      Barrett, Richard F; Alam, Sadaf R; de Almeida, Valmor F; Bernholdt, David E; Elwasif, Wael R; Kuehn, Jeffery A; Poole, Stephen W; Shet, Aniruddha G

      2008-01-01

      As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

    14. Scalable Computer Performance and Analysis (Hierarchical INTegration)

      Energy Science and Technology Software Center (OSTI)

      1999-09-02

      HINT is a program to measure a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

    15. Towards Energy-Centric Computing and Computer Architecture

      SciTech Connect (OSTI)

      2011-02-09

      Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software. About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL, where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000

    16. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Description SAGE, the Summer of Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research. The program emphasizes both teaching of field methods and research related to basic science and a variety of applied problems. SAGE is hosted by the National Security Education Center and the Earth and Environmental Sciences Division of the Los Alamos National

    17. Thermal battery statistics and plotting programs

      SciTech Connect (OSTI)

      Scharrer, G.L.

      1990-04-01

      Thermal battery functional test data are stored in an HP3000 minicomputer operated by the Power Sources Department. A program was written to read data from a battery data base, compute simple statistics (mean, minimum, maximum, standard deviation, and K-factor), print out the results, and store the data in a file for subsequent plotting. A separate program was written to plot the data. The programs were written in the Pascal programming language. 1 tab.
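
      The original programs were written in Pascal on an HP3000; the Python sketch below only illustrates the kind of summary statistics described (mean, minimum, maximum, standard deviation). The K-factor is battery-specific and is not defined in the abstract, so it is omitted, and the sample values shown are made up for illustration.

        # Sketch of the summary statistics described above (mean, min, max, std dev).
        # The original programs were Pascal on an HP3000; this is only an illustration.
        import math

        def battery_stats(samples):
            """Summary statistics of the kind described in the abstract.
            A K-factor is battery-specific and not defined there, so it is
            intentionally omitted from this sketch."""
            n = len(samples)
            mean = sum(samples) / n
            # Sample standard deviation (n - 1 in the denominator).
            std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1)) if n > 1 else 0.0
            return {"mean": mean, "min": min(samples), "max": max(samples), "std": std}

        print(battery_stats([27.1, 26.8, 27.4, 27.0, 26.9]))  # illustrative values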

    18. Volunteer Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National VolunteerMatch Retired and Senior Volunteer Program United Way of Northern New Mexico United Way of Santa Fe County Giving Employee Giving Campaign Holiday Food Drive...

    19. exercise program

      National Nuclear Security Administration (NNSA)

      and dispose of many different hazardous substances, including radioactive materials, toxic chemicals, and biological agents and toxins.

      There are a few programs NNSA uses...

    20. Counterintelligence Program

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1992-09-04

      To establish the policies, procedures, and specific responsibilities for the Department of Energy (DOE) Counterintelligence (CI) Program. This directive does not cancel any other directive.

    1. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research....

    2. Programming Stage

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-05-21

      This chapter addresses plans for the acquisition and installation of operating environment hardware and software and design of a training program.

    3. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) Network programs, we provide STEM role models...

    4. Special Programs

      Office of Energy Efficiency and Renewable Energy (EERE)

      Headquarters Human Resources Operations promotes a variety of hiring flexibilities for managers to attract a diverse workforce, from Student Internship Program opportunities (Pathways), Veteran...

    5. Counterintelligence Program

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2004-12-10

      The Order establishes Counterintelligence Program requirements and responsibilities for the Department of Energy, including the National Nuclear Security Administration. Supersedes DOE 5670.3.

    6. Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Description Inspiring girls to recognize their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) ...

    7. Extreme Scale Computing to Secure the Nation

      SciTech Connect (OSTI)

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

      Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S. and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high-end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear

    8. IMPACTS: Industrial Technologies Program, Summary of Program...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      IMPACTS: Industrial Technologies Program, Summary of Program Results for CY2009 IMPACTS: Industrial Technologies Program, Summary of Program Results for CY2009 ...

    9. PACKAGE (Plasma Analysis, Chemical Kinetics and Generator Efficiency): a computer program for the calculation of partial chemical equilibrium/partial chemical rate controlled composition of multiphased mixtures under one dimensional steady flow

      SciTech Connect (OSTI)

      Yousefian, V.; Weinberg, M.H.; Haimes, R.

      1980-02-01

      The NASA CEC Code was the starting point for PACKAGE, whose function is to evaluate the composition of a multiphase combustion product mixture under the following chemical conditions: (1) total equilibrium with pure condensed species; (2) total equilibrium with ideal liquid solution; (3) partial equilibrium/partial finite rate chemistry; and (4) fully finite rate chemistry. The last three conditions were developed to treat the evolution of complex mixtures such as coal combustion products. The thermodynamic variable pairs considered are pressure (P) and enthalpy, P and entropy, or P and temperature. Minimization of Gibbs free energy is used. This report gives detailed discussions of the formulation and input/output information used in the code. Sample problems are given. The code development, description, and current programming constraints are discussed. (DLC)
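
      As a rough illustration of the Gibbs minimization underlying condition (1) above, equilibrium composition of an ideal-gas mixture can be found by minimizing the total Gibbs free energy subject to element balance. The sketch below uses SciPy on a toy water-gas-shift system; it is not the PACKAGE formulation, and the dimensionless standard chemical potentials are placeholder values, not data from the report.

        # Toy Gibbs free-energy minimization for an ideal-gas mixture (SciPy sketch).
        # Species: CO, H2O, CO2, H2; the g0 values below are PLACEHOLDERS, not data
        # from the PACKAGE report.
        import numpy as np
        from scipy.optimize import minimize

        species = ["CO", "H2O", "CO2", "H2"]
        g0 = np.array([-10.0, -20.0, -35.0, 0.0])   # placeholder mu0_i / (R T)
        # Element balance rows (C, O, H) for each species column.
        A = np.array([[1, 0, 1, 0],                 # C
                      [1, 1, 2, 0],                 # O
                      [0, 2, 0, 2]])                # H
        b = A @ np.array([1.0, 1.0, 0.0, 0.0])      # feed: 1 mol CO + 1 mol H2O
        P = 1.0                                     # pressure (ideal gas, 1 bar)

        def gibbs(n):
            """Dimensionless total Gibbs energy: sum n_i (g0_i + ln(P n_i / n_tot))."""
            n = np.clip(n, 1e-12, None)             # keep the logarithms finite
            return float(np.sum(n * (g0 + np.log(P * n / n.sum()))))

        result = minimize(
            gibbs,
            x0=np.full(4, 0.5),                     # element-balanced initial guess
            method="SLSQP",
            bounds=[(1e-12, None)] * 4,
            constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
        )
        print(dict(zip(species, np.round(result.x, 4))))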

    10. Thermal Hydraulic Computer Code System.

      Energy Science and Technology Software Center (OSTI)

      1999-07-16

      Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

    11. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    12. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect (OSTI)

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

    13. HSS Voluntary Protection Program: Articles

      Broader source: Energy.gov [DOE]

      AJHA Program - The Automated Job Hazard Analysis (AJHA) computer program is part of an enhanced work planning process employed at the Department of Energy's Hanford worksite. The AJHA system is routinely used to perform evaluations for medium- and high-risk work, and in the development of corrective maintenance work packages at the site. The tool is designed to ensure that workers are fully involved in identifying the hazards, requirements, and controls associated with tasks.

    14. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits) by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
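
      The package itself extends C++ and Fortran-90 through custom data types and operator overloading; as a loose, stand-in illustration of the same idea (ordinary-looking arithmetic running at hundreds of digits after only minor source changes), Python's standard decimal module can be driven like this. This is not the package's API.

        # Loose illustration of high-precision arithmetic via a drop-in numeric type.
        # This uses Python's standard decimal module, not the package described above.
        from decimal import Decimal, getcontext

        getcontext().prec = 200                 # work with 200 significant digits

        def sqrt2_newton(iterations=9):
            """Newton's iteration for sqrt(2); ordinary-looking code, but every
            operation below runs at the precision set on the context."""
            x = Decimal(2)
            guess = Decimal(1)
            for _ in range(iterations):
                guess = (guess + x / guess) / 2
            return guess

        print(sqrt2_newton())                   # ~200-digit approximation of sqrt(2)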

    15. NV Energy -Energy Smart Schools Program | Department of Energy

      Broader source: Energy.gov (indexed) [DOE]

      pending approval Vending Machine Controls Personal Computing Equipment Program Info Sector Name Utility Administrator Nevada Power Company Website http:www.nvenergy.com...

    16. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access ('DMA') engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send ('RTS') message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
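
      The control flow in the claim above can be paraphrased as a small sketch. This is an interpretation only, not the patented implementation; the stub target class, the chunk size, and the acknowledgement behavior are all assumptions.

        # Paraphrase of the claimed control flow; the classes and chunk size are
        # stand-ins for illustration, not the patented implementation.
        CHUNK = 4  # assumed eager-chunk size (tiny, for illustration)

        class StubTarget:
            """Fake target DMA engine that ACKs the RTS after two FIFO chunks."""
            def __init__(self):
                self.received, self._chunks, self._acked = bytearray(), 0, False
            def receive_rts(self, length):   # target learns the message length
                self.length = length
            def rts_acknowledged(self):      # origin polls for the ACK
                return self._acked
            def fifo_put(self, chunk):       # eager path: memory FIFO operation
                self.received += chunk
                self._chunks += 1
                self._acked = self._chunks >= 2
            def direct_put(self, rest):      # rendezvous path: one direct put
                self.received += rest

        def origin_dma_send(data, target):
            target.receive_rts(len(data))                      # send the RTS
            offset = 0
            # Keep pushing chunks through the FIFO path until the ACK arrives
            # (or the message is exhausted).
            while offset < len(data) and not target.rts_acknowledged():
                target.fifo_put(data[offset:offset + CHUNK])
                offset += CHUNK
            if offset < len(data):
                target.direct_put(data[offset:])               # remainder, direct put

        t = StubTarget()
        origin_dma_send(b"0123456789abcdef", t)
        print(bytes(t.received))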

    17. Student Internship Programs Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Student Internship Programs Program Description The objective of the Laboratory's student internship programs is to provide students with opportunities for meaningful hands- on experience supporting educational progress in their selected scientific or professional fields. The most significant impact of these internship experiences is observed in the intellectual growth experienced by the participants. Student interns are able to appreciate the practical value of their education efforts in their

    18. Overview of the Defense Programs Research and Technology Development Program for fiscal year 1993. Appendix materials

      SciTech Connect (OSTI)

      Not Available

      1993-09-30

      The pages that follow contain summaries of the nine R&TD Program Element Plans for Fiscal Year 1993 that were completed in the Spring of 1993. The nine program elements are aggregated into three program clusters as follows: Design Sciences and Advanced Computation; Advanced Manufacturing Technologies and Capabilities; and Advanced Materials Sciences and Technology.

    19. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    20. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    1. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    2. Program Description | Robotics Internship Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      March 4, 2016. Apply Now for the Robotics Internship About the Internship Program Description Start of Appointment Renewal of Appointment End of Appointment Stipend Information...

    3. Barnes_NP_Program_Office_Research_Directions_V2.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics (NP): Target 2017 April 29-30, 2014 Bethesda, MD Ted Barnes DOENP Program Manager, Nuclear Data & Nuclear Theory Computing NP Program Office Research Directions n.b. ...

    4. Webinar: AspireIT K-12 Outreach Program

      Broader source: Energy.gov [DOE]

      AspireIT K-12 Outreach Program is a grant that connects high school and college women with K-12 girls interested in computing. Using a near-peer model, program leaders teach younger girls...

    5. Programming Challenges Workshop | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Programming Challenges Workshop Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC)

    6. Cupola Furnace Computer Process Model

      SciTech Connect (OSTI)

      Seymour Katz

      2004-12-31

      The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "expert system" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

    7. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which would collect more data than any computing center in existence could process. ... consortium grid called Open Science Grid, so they initiated a project known as FermiGrid. ...

    8. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    9. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      load-2 TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    10. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Computers, Electronics and Electrical Equipment (2010 MECS) Computers, Electronics and Electrical Equipment (2010 MECS) Manufacturing Energy and Carbon Footprint for Computers, Electronics and Electrical Equipment Sector (NAICS 334, 335) Energy use data source: 2010 EIA MECS (with adjustments) Footprint Last Revised: February 2014 View footprints for other sectors here. Manufacturing Energy and Carbon Footprint Computers, Electronics and Electrical Equipment (123.71 KB) More Documents

    11. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    12. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Technology Information Technology (IT) at ORNL serves a diverse community of stakeholders and interests. From everyday operations like email and telecommunications to institutional cluster computing and high bandwidth networking, IT at ORNL is responsible for planning and executing a coordinated strategy that ensures cost-effective, state-of-the-art computing capabilities for research and development. ORNL IT delivers leading-edge products to users in a risk-managed portfolio of

    13. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology, Los Alamos National Laboratory. Research areas: Agent-based Modeling; Mixing Patterns, Social Networks; Mathematical Epidemiology; Social Internet Research; Uncertainty Quantification. Mathematical and Computational Epidemiology (MCEpi): Quantifying model uncertainty in agent-based simulations for

    14. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    15. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
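
      As an illustration of the kind of mechanism this abstract describes, here is a minimal sketch in Python of an event logbook with search and undo. The class and method names are invented for illustration and are not taken from the patent.

```python
class Logbook:
    """Minimal event logbook: records events, supports search and undo."""

    def __init__(self):
        self.history = []  # chronological list of (description, undo_fn)

    def log(self, description, undo_fn=None):
        """Record an event and, optionally, a callable that reverses it."""
        self.history.append((description, undo_fn))

    def search(self, keyword):
        """Return indices and descriptions of past events matching a keyword."""
        return [(i, d) for i, (d, _) in enumerate(self.history) if keyword in d]

    def undo(self, index):
        """Undo one selected past event, if it supplied an undo action."""
        description, undo_fn = self.history[index]
        if undo_fn is not None:
            undo_fn()
        self.history.append((f"undo: {description}", None))


# Usage: log a file creation, search for it, then undo it.
import os, tempfile

path = os.path.join(tempfile.gettempdir(), "example.txt")
book = Logbook()
open(path, "w").close()
book.log(f"created {path}", undo_fn=lambda: os.remove(path))
print(book.search("created"))
book.undo(0)
```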

    16. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

      SciTech Connect (OSTI)

      Corones, James

      2013-09-23

      High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4.To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

    17. NNSA releases Stockpile Stewardship Program quarterly experiments summary |

      National Nuclear Security Administration (NNSA)

      National Nuclear Security Administration | (NNSA) releases Stockpile Stewardship Program quarterly experiments summary May 12, 2015 WASHINGTON, D.C. - The National Nuclear Security Administration today released its current quarterly summary of experiments conducted as part of its science-based Stockpile Stewardship Program. The experiments carried out within the program are used in combination with complex computational models and NNSA's Advanced Simulation and Computing (ASC) Program to

    18. Parallel computing in enterprise modeling.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

    19. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    20. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    1. Final Report: Correctness Tools for Petascale Computing

      SciTech Connect (OSTI)

      Mellor-Crummey, John

      2014-10-27

      In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, University of Maryland, the University of Wisconsin—Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.

    2. Program Overview

      Broader source: Energy.gov [DOE]

      The culture of the DOE community will be based on standards. Technical standards will be formally integrated into all DOE facility, program, and project activities. The DOE will be recognized as a...

    3. Deconvolution Program

      Energy Science and Technology Software Center (OSTI)

      1999-02-18

      The program is suitable for many applications in applied mathematics, experimental physics, and signal-analysis systems, as well as a range of engineering applications, e.g., spectrum deconvolution, signal analysis, and system property analysis.
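
      As a rough illustration of the kind of task such a package addresses, the sketch below deconvolves a measured signal with a known instrument response using NumPy/SciPy. It is a generic example under assumed data, not the program described in this record.

```python
import numpy as np
from scipy.signal import convolve, deconvolve

# A "true" signal and an instrument response (kernel), made up for illustration.
true_signal = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])
response = np.array([1.0, 0.6, 0.2])

# The measurement is the convolution of the true signal with the response.
measured = convolve(true_signal, response)

# Deconvolution recovers the original signal (exact here because there is no noise).
recovered, remainder = deconvolve(measured, response)
print(np.allclose(recovered, true_signal))  # True
```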

    4. Integrated Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Review (IPR) Quarterly Business Review (QBR) Access to Capital Debt Management July 2013 Aug. 2013 Sept. 2013 Oct. 2013 Nov. 2013 Dec. 2013 Jan. 2014 Feb. 2014 March...

    5. Science Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The focal point for basic and applied R&D programs with a primary focus on energy but also encompassing medical, biotechnology, high-energy physics, and advanced scientific ...

    6. Programming models

      SciTech Connect (OSTI)

      Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim

      2011-01-14

      A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

    7. Program Analyst

      Broader source: Energy.gov [DOE]

      A successful candidate in this position will serve as a Program Analyst for the System Operations team in the area of regulatory compliance. The successful candidate will also become a subject...

    8. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

      SciTech Connect (OSTI)

      Nikkel, D J

      2009-07-21

      This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3) Identify the technical

    9. Educational Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Educational Programs: A collaboration between Los Alamos National Laboratory and the University of California at San Diego (UCSD) Jacobs School of Engineering. Contacts: Institute Director Charles Farrar (505) 663-5330; UCSD EI Director Michael Todd (858) 534-5951; Professional Staff Assistant Ellie Vigil (505) 667-2818; Administrative Assistant Rebecca Duran (505) 665-8899. There are two educational components to the Engineering Institute. The Los Alamos Dynamic

    10. Program Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Leadership - Sandia Energy ...

    11. Volunteer Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Volunteer Program Volunteer Program Our good neighbor pledge includes active employee engagement in our communities through volunteering. More than 3,000 current and retired Lab employees have logged more than 1.8 million volunteer hours since 2007. August 19, 2015 Los Alamos National Laboratory employee volunteers with Mountain Canine Corps Lab employee Debbi Miller volunteers for the Mountain Canine Corps with her search and rescue dogs. She also volunteers with another search and rescue

    12. Program Summaries

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program Summaries Basic Energy Sciences (BES) BES Home About Research Facilities Science Highlights Benefits of BES Funding Opportunities Basic Energy Sciences Advisory Committee (BESAC) Community Resources Program Summaries Brochures Reports Accomplishments Presentations BES and Congress Science for Energy Flow Seeing Matter Nano for Energy Scale of Things Chart Contact Information Basic Energy Sciences U.S. Department of Energy SC-22/Germantown Building 1000 Independence Ave., SW Washington,

    13. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    14. Parallel programming with Ada

      SciTech Connect (OSTI)

      Kok, J.

      1988-01-01

      To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.
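
      The tasking idea discussed above, executing independent parts of an algorithm in parallel, looks roughly like this when transplanted into Python's concurrent.futures. This is a loose analogue for illustration only, not Ada syntax or code from the paper.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def partial_sum(values):
    # One "task": sum a slice of the data independently of the others.
    return math.fsum(values)

data = [1.0 / (n * n) for n in range(1, 100001)]
chunks = [data[i::4] for i in range(4)]          # four independent work units

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(partial_sum, chunks))

print(sum(results))    # ~ pi^2 / 6, computed from parts executed in parallel
```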

    15. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

      DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

      James Menart

      2013-06-07

      This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

    16. Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) |

      Broader source: Energy.gov (indexed) [DOE]

      Department of Energy 1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation es099_pesaran_2011_p.pdf (1.5 MB) More Documents & Publications Overview of Computer-Aided Engineering of Batteries (CAEBAT) and Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries Battery Thermal Modeling and Testing Progress of Computer-Aided Engineering of Batteries (CAEBAT)

    17. Overview of Computer-Aided Engineering of Batteries (CAEBAT) and

      Broader source: Energy.gov (indexed) [DOE]

      Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries | Department of Energy 2 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting es117_pesaran_2012_o.pdf (3.68 MB) More Documents & Publications Progress of Computer-Aided Engineering of Batteries (CAEBAT) Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) Vehicle Technologies Office Merit Review 2014: Development of

    18. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home » Energy Research » Advanced Scientific Computing Research (ASCR) » High Performance Computing ...

    19. Plasma Simulation Program

      SciTech Connect (OSTI)

      Greenwald, Martin

      2011-10-04

      Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one and all are supplemental or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities. [1]. Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical

    20. Student Internship Programs Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for a summer high school student to 75,000 for a Ph.D. student working full-time for a year. Program Coordinator: Scott Robbins Email: srobbins@lanl.gov Phone number: 663-5621...

    1. High performance computing and communications: FY 1996 implementation plan

      SciTech Connect (OSTI)

      1995-05-16

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

    2. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2001-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    3. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2003-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    4. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2002-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    5. The HILDA program

      SciTech Connect (OSTI)

      Close, E.; Fong, C.; Lee, E.

      1991-10-30

      Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people; as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for Heavy Ion Fusion induction linac drivers. This program should model near-term machines as well as fullscale drivers. The code objectives were: (1) A relatively detailed, but easily understood model. (2) Modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface. (3) Documentation that defines, and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.

    6. Programming in Fortran M

      SciTech Connect (OSTI)

      Foster, I.; Olson, R.; Tuecke, S.

      1993-08-01

      Fortran M is a small set of extensions to Fortran that supports a modular approach to the construction of sequential and parallel programs. Fortran M programs use channels to plug together processes which may be written in Fortran M or Fortran 77. Processes communicate by sending and receiving messages on channels. Channels and processes can be created dynamically, but programs remain deterministic unless specialized nondeterministic constructs are used. Fortran M programs can execute on a range of sequential, parallel, and networked computers. This report incorporates both a tutorial introduction to Fortran M and a users guide for the Fortran M compiler developed at Argonne National Laboratory. The Fortran M compiler, supporting software, and documentation are made available free of charge by Argonne National Laboratory, but are protected by a copyright which places certain restrictions on how they may be redistributed. See the software for details. The latest version of both the compiler and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/fortran-m at info.mcs.anl.gov.
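
      The channel-and-process style described above can be mimicked in other languages. The sketch below uses Python's multiprocessing queues as a stand-in for Fortran M channels, purely to illustrate the idea of processes plugged together by message passing; it is not Fortran M code.

```python
from multiprocessing import Process, Queue

def producer(channel):
    # Send a stream of messages down the channel, then a sentinel.
    for value in range(5):
        channel.put(value)
    channel.put(None)

def consumer(channel):
    # Receive messages until the sentinel arrives.
    while True:
        value = channel.get()
        if value is None:
            break
        print("received", value)

if __name__ == "__main__":
    channel = Queue()          # plays the role of a Fortran M channel
    p = Process(target=producer, args=(channel,))
    c = Process(target=consumer, args=(channel,))
    p.start(); c.start()
    p.join(); c.join()
```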

    7. Debugging automation tools based on event grammars and computations over traces

      SciTech Connect (OSTI)

      Auguston, M.

      1997-11-01

      This report contains viewgraphs whose purpose is to present research on, and the design of, software testing and debugging automation tools, such as a language for computations over source program execution history.

    8. Berkeley Lab Opens State-of-the-Art Facility for Computational...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Complementing NERSC and ESnet in the facility will be research programs in applied mathematics and computer science that develop new methods for advancing scientific discovery. ...

    9. Scientific Cloud Computing Misconceptions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scientific Cloud Computing Misconceptions Scientific Cloud Computing Misconceptions July 1, 2011 Part of the Magellan project was to understand both the possibilities and the limitations of cloud computing in the pursuit of science. At a recent conference, Magellan investigator Shane Canon outlined some persistent misconceptions about doing science in the cloud - and what Magellan has taught us about them. » Read the ISGTW story. » Download the slides (PDF, 4.1MB

    10. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    11. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing: Dynamic Frequency Scaling. One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
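
      A back-of-the-envelope sketch of the frequency/energy tradeoff described above, using made-up numbers: if dynamic power scales roughly with frequency times voltage squared and runtime scales inversely with frequency for CPU-bound work, total energy can still drop at a lower operating point even though the run takes longer. The coefficients and operating points below are assumptions for illustration only.

```python
# Hypothetical numbers, for illustration only.
def energy_joules(freq_ghz, volt, runtime_base_s, freq_base_ghz=2.4, power_coeff=10.0):
    """Crude model: dynamic power ~ C * f * V^2, runtime ~ 1/f for CPU-bound work."""
    runtime = runtime_base_s * (freq_base_ghz / freq_ghz)
    power = power_coeff * freq_ghz * volt ** 2
    return power * runtime, runtime

for freq, volt in [(2.4, 1.10), (1.9, 0.95)]:
    energy, runtime = energy_joules(freq, volt, runtime_base_s=100.0)
    print(f"{freq} GHz: {runtime:6.1f} s, {energy:8.1f} J")
```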

    12. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Computer Security: NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident, you should contact NERSC Security as soon as

    13. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    14. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science: Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Leadership: Group Leader ...

    15. Computer-Aided Engineering of Batteries for Designing Better Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Lee, K. J.; Santhanagopalan, S.

      2012-02-01

      This presentation describes the current status of the DOE's Energy Storage R and D program, including modeling and design tools and the Computer-Aided Engineering for Automotive Batteries (CAEBAT) program.

    16. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

      Office of Energy Efficiency and Renewable Energy (EERE)

      Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

    17. Postdoctoral Program Program Description The Postdoctoral (Postdoc...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Postdoctoral Program Program Description The Postdoctoral (Postdoc) Research program offers the opportunity for appointees to perform research in a robust scientific R&D...

    18. Fault-Oblivious Exascale Computing Environment | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility Fault-Oblivious Exascale Computing Environment PI Name: Maya B. Gokhale PI Email: gokhale2@llnl.gov Allocation Program: INCITE Allocation Hours at ALCF: 10,000,000 Year: 2012 Research Domain: Computer Science Two areas of concern that have emerged from several DOE meetings on exascale systems (machines with 100 million cores) are runtime systems which can function at that scale, and fault management. The Fault Oblivious Exascale (FOX) project aims to build a software stack

    19. Large Scale Production Computing and Storage Requirements for Nuclear

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics: Target 2017. This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

    20. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    1. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated

    2. Navajo Electrification Demonstration Program

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Future Plans * Navajo Electrification Demonstration Program - Video. OBJECTIVES ... Navajo Electrification Demonstration Program ...

    3. Guidelines for Academic Cooperation Program (ACP)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      academic cooperation program Guidelines for Academic Cooperation Program (ACP) It is the responsibility of all Academic Cooperation Program (ACP) users to: Understand their task(s) Understand the potential hazards associated with their experiment(s) Comply fully with all LLNL safety and computer security regulations and procedures. Incoming ACPs must work under mandatory line-of-sight supervision at, and above, the Work Authorization Level B per ES&H Manual Document 2.2, Table 2 on page 7.

    4. Cori Phase 1 Training: Programming and Optimization

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Optimization Cori Phase 1 Training: Programming and Optimization NERSC will host a four-day training event for Cori Phase 1 users on Programming Environment, Debugging and Optimization from Monday June 13 to Thursday June 16. The presenters will be Cray instructor Rick Slick and NERSC staff. Cray XC Series Programming and Optimization Description This course is intended for people who work in applications support or development of Cray XC Series computer systems. It familiarizes students with

    5. The Macolumn - the Mac gets geophysical. [A review of geophysical software for the Apple Macintosh computer

      SciTech Connect (OSTI)

      Busbey, A.B. )

      1990-02-01

      Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.

    6. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

    7. Overview of the Defense Programs Research and Technology Development Program for Fiscal Year 1993

      SciTech Connect (OSTI)

      Not Available

      1993-09-30

      This document presents a programmatic overview and program element plan summaries for conceptual design and assessment; physics; computation and modeling; system engineering science and technology; electronics, photonics, sensors, and mechanical components; chemistry and materials; special nuclear materials, tritium, and explosives.

    8. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
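
      The method of successive approximations mentioned in this patent abstract (Seidel-style iteration) is easy to state in modern code. The sketch below is a generic worked example of the iteration, not a description of the patented machine.

```python
import numpy as np

def seidel_solve(A, b, iterations=50):
    """Solve A x = b by Seidel-style successive approximation.

    Each unknown is updated in turn using the most recent values of the
    others; for diagonally dominant systems the iteration converges quickly.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([9.0, 20.0, 22.0])
print(seidel_solve(A, b))          # approaches the exact solution
print(np.linalg.solve(A, b))       # reference answer
```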

    9. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
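
      A minimal sketch, assuming a simple in-memory table, of the kind of bookkeeping the abstract describes: per-processor health and allocation state consulted on each allocation decision. The class and field names are invented for illustration and do not come from the CPA software.

```python
class ProcessorAllocator:
    """Toy allocator: tracks health and ownership of each processor."""

    def __init__(self, num_procs):
        # One record per processor: healthy flag and current job (None if free).
        self.table = {p: {"healthy": True, "job": None} for p in range(num_procs)}

    def mark_unhealthy(self, proc):
        self.table[proc]["healthy"] = False

    def allocate(self, job, count):
        """Reserve `count` healthy, free processors for `job`, or raise."""
        free = [p for p, rec in self.table.items()
                if rec["healthy"] and rec["job"] is None]
        if len(free) < count:
            raise RuntimeError("not enough healthy free processors")
        chosen = free[:count]
        for p in chosen:
            self.table[p]["job"] = job
        return chosen

    def release(self, job):
        for rec in self.table.values():
            if rec["job"] == job:
                rec["job"] = None


cpa = ProcessorAllocator(8)
cpa.mark_unhealthy(3)
print(cpa.allocate("job-42", 4))   # e.g. [0, 1, 2, 4]
cpa.release("job-42")
```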

    10. PROGRAM ABSTRACTS

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      & DEVELOPMENT: PROGRAM ABSTRACTS. Energy Efficiency and Renewable Energy, Office of Transportation Technologies, Office of Advanced Automotive Technologies. Topics: Catalyst Layer; Bipolar Plate; Electrode Backing Layers; Integrated Systems; Polymer Electrolyte Membrane Fuel Cells; Fuel Cell Stack; PEM Stack & Stack Components; Fuel Cell Stack System; Air Management System; Fuel Processor System for Transportation. June 1999.

    11. Advanced Scientific Computing Research Network Requirements

      SciTech Connect (OSTI)

      Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

      2013-03-08

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

    12. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.
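
      To make the indirection-vulnerability connection concrete, here is a small, generic Python example (not taken from the report): a dispatch table is a layer of indirection, and letting untrusted input choose the key turns that indirection into an attack surface.

```python
# A dispatch table: one layer of indirection between a request name and code.
def show_status():
    return "status: ok"

def delete_everything():
    return "pretend we deleted all the data"

handlers = {"status": show_status, "admin_delete": delete_everything}

def handle_request(action):
    # Vulnerable: the untrusted string indexes the table directly.
    return handlers[action]()

def handle_request_safely(action, allowed=("status",)):
    # Safer: constrain which indirections untrusted input may follow.
    if action not in allowed:
        raise PermissionError(f"action {action!r} not permitted")
    return handlers[action]()

print(handle_request("admin_delete"))        # indirection abused
print(handle_request_safely("status"))       # indirection constrained
```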

    13. Student science enrichment training program

      SciTech Connect (OSTI)

      Sandhu, S.S.

      1994-08-01

      This is a report on the Student Science Enrichment Training Program, with special emphasis on chemical and computer science fields. The residential summer session was held at the campus of Claflin College, Orangeburg, SC, for six weeks during the 1993 summer, to run concomitantly with the college's summer school. Fifty participants selected for this program included high school sophomores, juniors and seniors. The students came from rural South Carolina and adjoining states which, presently, have limited science and computer science facilities. The program focused on high ability minority students, with high potential for science, engineering and mathematical careers. The major objective was to increase the pool of well qualified college entering minority students who would elect to go into science, engineering and mathematical careers. The Division of Natural Sciences and Mathematics and Engineering at Claflin College received major benefits from this program as it helped them to expand the Departments of Chemistry, Engineering, Mathematics and Computer Science as a result of additional enrollment. It also established an expanded pool of well qualified minority science and mathematics graduates, who were recruited by the federal agencies and private corporations visiting Claflin College Campus. Department of Energy's relationship with Claflin College increased the public awareness of energy related job opportunities in the public and private sectors.

    14. Parallel programming with PCN. Revision 1

      SciTech Connect (OSTI)

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

    15. Certification of computer professionals: A good idea?

      SciTech Connect (OSTI)

      Boggess, G.

      1994-12-31

      In the early stages of computing there was little understanding or attention paid to the ethical responsibilities of professionals. Companies routinely put secretaries and music majors through 30 hours of video training and turned them loose on data processing projects. As the nature of the computing task changed, these same practices were followed and the trainees were set loose on life-critical software development projects. The enormous risks of using programmers with limited training have been demonstrated by the GAO report on the BSY-2 program.

    16. Cheaper Adjoints by Reversing Address Computations

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hascoët, L.; Utke, J.; Naumann, U.

      2008-01-01

      The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
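
      For readers unfamiliar with the bottleneck being discussed, the sketch below shows a tiny reverse-mode differentiation of y = sin(x1*x2) in which the intermediate product is either stored on a tape or recomputed before the backward sweep. It is a generic illustration of the storage-versus-recomputation issue, not the inverse-computation method proposed in the paper.

```python
import math

def forward_and_reverse(x1, x2, store_intermediates=True):
    """Reverse-mode derivative of y = sin(x1 * x2) w.r.t. x1 and x2."""
    # Forward sweep.
    t = x1 * x2                # intermediate value needed again in reverse
    y = math.sin(t)

    tape = {"t": t} if store_intermediates else {}

    # Reverse sweep: needs t; either read it back or recompute it.
    t_back = tape["t"] if store_intermediates else x1 * x2
    dy_dt = math.cos(t_back)
    dy_dx1 = dy_dt * x2
    dy_dx2 = dy_dt * x1
    return y, (dy_dx1, dy_dx2)

print(forward_and_reverse(0.5, 2.0, store_intermediates=True))
print(forward_and_reverse(0.5, 2.0, store_intermediates=False))
```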

    17. Reconnection methods for an arbitrary polyhedral computational grid

      SciTech Connect (OSTI)

      Rasskazova, V.V.; Sofronov, I.D.; Shaporenko, A.N.; Burton, D.E.; Miller, D.S.

      1996-08-01

      The paper suggests a method for local reconstruction of a 3D irregular computational grid and an algorithm for its program implementation. Two grid reconstruction operations are used as primitives: pasting two cells that share a common face, and cutting a cell into two by a given plane. The paper presents and analyzes criteria for choosing one operation or the other. A program for local reconstruction of a 3D irregular grid is used to conduct two test computations, and the computed results are given.
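
      A minimal sketch, under an assumed face-list representation of cells, of what the paste primitive amounts to: the shared face disappears and the two face sets are merged. This illustrates the idea only and is not the algorithm from the paper.

```python
def paste_cells(cell_a, cell_b):
    """Merge two cells (given as sets of face ids) that share exactly one face."""
    shared = cell_a & cell_b
    if len(shared) != 1:
        raise ValueError("paste requires exactly one common face")
    # The common face becomes interior after the merge and is removed.
    return (cell_a | cell_b) - shared

# Two hexahedral cells sharing face 105.
cell_a = {101, 102, 103, 104, 105, 106}
cell_b = {105, 201, 202, 203, 204, 205}
print(sorted(paste_cells(cell_a, cell_b)))   # 10 remaining boundary faces
```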

    18. Hybrid Parallel Programming with MPI and Unified Parallel C | Argonne

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Leadership Computing Facility Parallel Programming with MPI and Unified Parallel C Authors: Dinan, J., Balaji, P., Lusk, E., Sadayappan, P., Thakur, R. The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity

    19. ALCF Data Science Program: Proposal Instructions | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility. ALCF Data Science Program: Proposal Instructions. The ADSP Proposal Process: The ADSP projects will be categorized as either "data science projects", which will have a specific science goal, or "software

    20. Sandia National Laboratories: Advanced Simulation and Computing: Contact

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sandia ASC Program Contacts: Program Director Bruce Hendrickson (bahendr@sandia.gov); Program Manager David Womble (dewombl@sandia.gov); Integrated Codes Lead Scott Hutchinson (sahutch@sandia.gov); Physics & Engineering Modeling Lead Jim Redmond (jmredmo@sandia.gov); Verification & Validation Lead Curt Nilsen (canilse@sandia.gov); Computational Systems & Software Engineering Lead Ken Alvin (kfalvin@sandia.gov); Facilities Operations & User Support Lead Tom Klitsner

    1. Apply for the Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      How to Apply: Apply for the Parallel Computing Summer Research Internship, creating next-generation leaders in HPC research and applications development. Program Co-Leads: Robert (Bob) Robey, Gabriel Rockefeller, Hai Ah Nam; Professional Staff Assistant: Nicole Aguilar Garcia (505) 665-3048. Current application deadline is February 5, 2016, with notification by early March 2016. Who can apply? Upper division undergraduate students and early graduate

    2. Computer System, Cluster and Networking Summer Institute (CSCNSI)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NSEC » Information Science and Technology Institute (ISTI) » Summer School Programs » CSCNSI. Computer System, Cluster and Networking Summer Institute: Emphasizes practical skills development. Contacts: Program Lead Carolyn Connor (505) 665-9891; Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048. Technical enrichment program for third-year

    3. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers gain deep mechanistic view of a critical ATP-driven calcium pump. Visualization of primate tooth. ALCF's new data science program targets "big data" problems ...

    4. Foundational Tools for Petascale Computing

      SciTech Connect (OSTI)

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

    5. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    6. Quality Program

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Procedure ETA-QP001, Revision 0, effective October 15, 2001. Quality Program prepared by Electric Transportation Applications (prepared by Jude M. Clark; approved by Donald B. Karner). 2001 Electric Transportation Applications, all rights reserved. Table of contents: 1.0 Objectives; 2.0 Scope; 3.0 Documentation; 4.0 Prerequisites; 5.0 Exclusions; 6.0 Quality

    7. Program Development

      SciTech Connect (OSTI)

      Atencio, Julian J.

      2014-05-01

      This presentation covers how to go about developing a human reliability program. In particular, it touches on conceptual thinking, raising awareness in an organization, the actions that go into developing a plan. It emphasizes evaluating all positions, eliminating positions from the pool due to mitigating factors, and keeping the process transparent. It lists components of the process and objectives in process development. It also touches on the role of leadership and the necessity for audit.

    8. Convergence: Computing and communications

      SciTech Connect (OSTI)

      Catlett, C.

      1996-12-31

      This paper highlights the operations of the National Center for Supercomputing Applications (NCSA). NCSA is developing and implementing a national strategy to create, use, and transfer advanced computing and communication tools and information technologies for science, engineering, education, and business. The primary focus of the presentation is historical and expected growth in the computing capacity, personal computer performance, and Internet and WorldWide Web sites. Data are presented to show changes over the past 10 to 20 years in these areas. 5 figs., 4 tabs.

    9. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      HPC systems, but also for experts in the use of these systems to solve complex problems." ... laboratories will play a key role in solving manufacturing challenges and ...

    10. DOE Office of Science Computing Facility Operational Assessment Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Office of Environmental Management 2015 Year in Review DOE Office of Environmental Management 2015 Year in Review December 23, 2015 - 10:00am Addthis DOE Office of Environmental Management 2015 Year in Review Version Available for Download "I am proud of all of the work we in EM-both at headquarters and in the field-have accomplished this year. While facing the most complex cleanup challenges, measurable progress was made in 2015-a testament to our skilled workforce. The

    11. Eight Projects Selected for NERSC's Data Intensive Computing Pilot Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2/2012 - New Hampshire, United States The tutorial video helps with learning the new charting and file tools, this is super, keep it coming! I have no suggestions at this time. 02/23/2012 - United States I use the NG and CL weekly data and find the Beta to be very user friendly. 02/20/2012 - Texas, United States Great job! I would suggest adding an option to make the units in the y-axis a "per day" figure in the monthly view. Other suggestions: (1)make us of the second vertical axis if

    12. Data Intensive Computing Pilot Program 2012/2013 Awards

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Use of the X-ray free-electron laser is necessary for this experiment since it makes it possible to outrun the radiation damage that would normally accrue to the catalytic cluster at a ...

    13. ASC Program Elements | National Nuclear Security Administration | (NNSA)

      National Nuclear Security Administration (NNSA)

      Computing ASC Program Elements Established in 1995, the Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (NNSA) Defense Programs' shift in emphasis from test-based confidence to simulation-based confidence. Under ASC, scientific simulation capabilities are developed to analyze and predict the performance, safety, and reliability of nuclear weapons and to certify their functionality. ASC integrates the work of three

    14. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      extremely intricate, detailed computational simulation that we can test our theories, and simulating weather and climate over the entire globe requires the most massive high-performance computers that exist. Such extreme problems are found in numerous laboratory missions, including astrophysics, weapons programs, materials science, and earth science.

    15. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages

      Oak Ridge National Laboratory, Computing and Computational Sciences Directorate. Divisions and centers: Computational Sciences and Engineering; Computer Science and Mathematics; Information Technology; Joint Institute for Computational Sciences; National Center for Computational Sciences. ORNL research areas include Neutron Sciences and Biological Systems

    16. Maryland Efficiency Program Options

      Office of Energy Efficiency and Renewable Energy (EERE)

      Maryland Efficiency Program Options, from the Tool Kit Framework: Small Town University Energy Program (STEP).

    17. STEP Program Benchmark Report

      Broader source: Energy.gov [DOE]

      STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).

    18. AutoPIPE Extract Program

      Energy Science and Technology Software Center (OSTI)

      1993-07-02

      The AutoPIPE Extract Program (APEX) provides an interface between CADAM (Computer Aided Design and Manufacturing) Release 21 drafting software and the AutoPIPE, Version 4.4, piping analysis program. APEX produces the AutoPIPE batch input file that corresponds to the piping shown in a CADAM model. The card image file contains header cards, material cards, and pipe cross section cards as well as tee, bend, valve, and flange cards. Node numbers are automatically generated. APEX processes straight pipe, branch lines, and ring geometries.

    19. On Undecidability Aspects of Resilient Computations and Implications to Exascale

      SciTech Connect (OSTI)

      Rao, Nageswara S

      2014-01-01

      Future Exascale computing systems, with a large number of processors, memory elements, and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

    20. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents: Director's Message; About ALCF; Introducing Mira

    1. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    2. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    3. Program Evaluation: Program Logic | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Program Evaluation: Program Logic. Step four will help you develop a logic model for your program (learn more about the other steps in general program evaluations): What is a Logic Model? Benefits of Using Logic Modeling; Pitfalls and How to Avoid Them; Steps to Developing a Logic Model. What is a Logic Model? Logic modeling is a thought process program evaluators have found useful for at least forty years and has become increasingly popular with program managers during the

    4. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    5. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory | 9700 South Cass Avenue | Argonne, IL 60439 | www.anl.gov | September 2013. Key facts about the Argonne Leadership Computing Facility: user support and services. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    6. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    7. Stencil Computation Optimization

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures Kaushik Datta ∗† , Mark Murphy † , Vasily Volkov † , Samuel Williams ∗† , Jonathan Carter ∗ , Leonid Oliker ∗† , David Patterson ∗† , John Shalf ∗ , and Katherine Yelick ∗† ∗ CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA † Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA Abstract Understanding the most
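
      The abstract above is truncated; as a hedged illustration (not code from the cited paper), the sketch below shows the kind of simple Jacobi-style stencil kernel that auto-tuning work of this sort typically targets, written naively in Python/NumPy. The grid size, boundary condition, and sweep count are assumptions made only for the example; the paper's contribution is the blocking, vectorization, and architecture-specific tuning of such loops, which is not reproduced here.

      # Illustrative 2D 5-point Jacobi stencil (an assumption about the kernel
      # class studied in such papers, not code from the cited work).
      import numpy as np

      def jacobi_step(u: np.ndarray) -> np.ndarray:
          """One sweep of a 5-point stencil: each interior point becomes the
          average of its four neighbors."""
          v = u.copy()
          v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
          return v

      grid = np.zeros((256, 256))
      grid[0, :] = 1.0              # fixed boundary condition on one edge
      for _ in range(100):          # repeated sweeps; tuning work focuses on
          grid = jacobi_step(grid)  # blocking and cache reuse of this loop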

    8. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    9. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL. Fun fact: Most systems require air conditioning or chilled water to cool super powerful

    10. Computation supporting biodefense

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Conference on High-Speed Computing LANL / LLNL / SNL Salishan Lodge, Gleneden Beach, Oregon 24 April 2003 Murray Wolinsky murray@lanl.gov The Role of Computation in Biodefense 1. Biothreat 101 2. Bioinformatics 101 Examples 3. Sequence analysis: mpiBLAST Feng 4. Detection: KPATH Slezak 5. Protein structure: ROSETTA Strauss 6. Real-time epidemiology: EpiSIMS Eubank 7. Forensics: VESPA Myers, Korber 8. Needs System level analytical capabilities Enhanced phylogenetic algorithms Novel

    11. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us Group Leader Carl Gable Deputy Group Leader Gilles Bussod Email Profile pages header Search our Profile pages Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    12. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    13. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    14. Powering Research | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A metal-binding protein designed by the Baker laboratory. Towards Breakthroughs in Protein Structure Calculation and Design David Baker Allocation Program: INCITE Allocation Hours: 120 Million Breakthrough Science At the ALCF, we provide researchers from industry, academia, and government agencies with access to leadership-class supercomputing capabilities and a team of expert computational scientists. This unparalleled combination of resources is enabling breakthroughs in science and

    15. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science at ALCF: projects are filterable by allocation program (INCITE, ALCC, ESP, Director's Discretionary), year (2008-2016), and research domain (Physics, Mathematics, Computer Science, Chemistry, Earth Science, Energy Technologies, Materials Science, Engineering, Biological Sciences). Example: Advanced Electronic Structure Methods for Heterogeneous Catalysis and Separation of Heavy Metals, Mark Gordon, Iowa State University, ESP 2015, Chemistry. Weak ignition behind a

    16. Determining Memory Use | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Determining Memory Use (ALCF debugging documentation; related pages cover Allinea DDT, core file settings, using VNC with a debugger, bgq_stack, gdb, Coreprocessor, runjob termination, and TotalView). Determining the amount of memory available during the execution of the program requires the use of

    17. Data and Networking | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Data Tools & Models Glossary › FAQS › Overview Data Tools Time Series Models & Documentation EIA has a vast amount of data, reports, forecasts, analytical content, and documentation to assist researchers working on energy topics. For users eager to dive deeper into our content, we have assembled tools to customize searches, view specific data sets, study detailed documentation, and access time-series data. Application Programming Interface (API): The API allows computers to more

    18. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science at ALCF: projects are filterable by allocation program (INCITE, ALCC, ESP, Director's Discretionary), year (2008-2016), and research domain (Physics, Mathematics, Computer Science, Chemistry, Earth Science, Energy Technologies, Materials Science, Engineering, Biological Sciences). Example: an example of a Category 5 hurricane simulated by the CESM at 13 km resolution, Accelerated Climate Modeling for Energy, Mark Taylor, Sandia National Laboratories, INCITE 2016, 100

    19. (U) Computation acceleration using dynamic memory

      SciTech Connect (OSTI)

      Hakel, Peter

      2014-10-24

      Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
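
      As a hedged sketch of the trade-off described above (computation replaced by storage whose final size is not known in advance), the following Python fragment caches expensive results in a dictionary that grows on demand, avoiding both over-dimensioned arrays and manual copy-and-reallocate logic. The cost function is a placeholder assumption, not anything from the cited work.

      # Minimal memoization sketch: results are stored in a dynamically growing
      # dict, so no storage size has to be guessed in advance (illustrative only).
      import math

      _cache = {}   # grows on demand as new arguments are encountered

      def expensive_quantity(x):
          """Stand-in for a costly calculation whose results are reused."""
          if x not in _cache:
              _cache[x] = math.exp(-x) * math.sin(x) ** 2   # placeholder for the real work
          return _cache[x]

      # Repeated requests for the same argument are now served from memory.
      values = [expensive_quantity(n % 10) for n in range(100000)]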

    20. Intergovernmental Programs | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Intergovernmental Programs. The Office of Environmental Management supports, by means of grants and cooperative agreements, a number of

    1. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

      SciTech Connect (OSTI)

      Michael Pernice

      2010-09-01

      INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

    2. DHC: a diurnal heat capacity program for microcomputers

      SciTech Connect (OSTI)

      Balcomb, J.D.

      1985-01-01

      A computer program has been developed that can predict the temperature swing in direct gain passive solar buildings. The diurnal heat capacity (DHC) program calculates the DHC for any combination of homogeneous or layered surfaces using closed-form harmonic solutions to the heat diffusion equation. The theory is described, a Basic program listing is provided, and an example solution printout is given.
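
      As a heavily hedged illustration of what a "closed-form harmonic solution to the heat diffusion equation" looks like in the simplest case (a thick, homogeneous layer driven by a sinusoidal surface temperature), the sketch below evaluates the standard semi-infinite-slab result: the surface heat-flux amplitude is dT*sqrt(omega*k*rho*c) and the flux leads the temperature swing by 45 degrees. The actual DHC program handles layered constructions and computes the diurnal heat capacity itself; none of that is reproduced here, and the material properties shown are illustrative assumptions.

      # Harmonic (sinusoidal steady-state) surface response of a thick homogeneous
      # layer -- the textbook building block behind diurnal heat capacity methods.
      # Material properties below are illustrative, not values from the DHC program.
      import math

      def surface_flux_amplitude(k, rho, c, period_s, dT):
          """Peak surface heat flux (W/m^2) for a surface temperature swing of
          amplitude dT (K) and period period_s (s) on a semi-infinite slab of
          conductivity k, density rho, and specific heat c."""
          omega = 2.0 * math.pi / period_s
          return dT * math.sqrt(omega * k * rho * c)

      # Example: concrete-like material, 24-hour cycle, 5 K swing amplitude.
      q_peak = surface_flux_amplitude(k=1.4, rho=2200.0, c=880.0,
                                      period_s=24 * 3600.0, dT=5.0)
      print(f"peak surface flux ~ {q_peak:.1f} W/m^2 (flux leads temperature by 45 deg)")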

    3. Program Update

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      April-June 2014 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. Please direct all comments and inquiries to lm@hq.doe.gov. Visit us at http://energy.gov/lm/. Goal 4: Optimizing the Use of Federal Lands Through Disposition. The foundation of the U.S. Department of Energy (DOE) Office of Legacy Management's (LM) Goal 4, "Optimize the use of land and

    4. Program Update

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      January-March 2015 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. Please direct all comments and inquiries to lm@hq.doe.gov. Visit us at http://energy.gov/lm/. Goal 4: Successful Transition from Mound Site to Mound Business Park Continues. The Mound Business Park attracts a variety of businesses to the former U.S. Department of Energy (DOE) Mound, Ohio, Site in Miamisburg. In

    5. Managing turbine-generator outages by computer

      SciTech Connect (OSTI)

      Reinhart, E.R. [Reinhart and Associates, Inc., Austin, TX (United States)]

      1997-09-01

      This article describes software being developed to address the need for computerized planning and documentation programs that can help manage outages. Downsized power-utility companies and the growing demand for independent, competitive engineering and maintenance services have created a need for a computer-assisted planning and technical-direction program for turbine-generator outages. To meet this need, a software tool is now under development that can run on a desktop or laptop personal computer to assist utility personnel and technical directors in outage planning. Total Outage Planning Software (TOPS), which runs on Windows, takes advantage of the mass data storage available with compact-disc technology by archiving the complete outage documentation on CD. Previous outage records can then be indexed, searched, and viewed on a computer with the click of a mouse. Critical-path schedules, parts lists, parts order tracking, work instructions and procedures, custom data sheets, and progress reports can be generated by computer on-site during an outage.

    6. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    7. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    8. The HILDA program

      SciTech Connect (OSTI)

      Close, E.; Fong, C; Lee, E.

      1991-10-30

      Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for Heavy Ion Fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.

    9. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    10. Geothermal Technologies Program Overview - Peer Review Program

      SciTech Connect (OSTI)

      Milliken, JoAnn

      2011-06-06

      This Geothermal Technologies Program presentation was delivered on June 6, 2011 at a Program Peer Review meeting. It contains annual budget, Recovery Act, funding opportunities, upcoming program activities, and more.

    11. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design» supercomputing into the future Overview Los Alamos Asteroid Killer

    12. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      computers NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile... Sandia donates 242 computers to northern California schools Sandia National

    13. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los ...

    14. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      OpenEI Reference Library entry. Web Site: Computer simulation. Author: Wikipedia. Published: Wikipedia, 2013. DOI: Not Provided...

    15. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers. New facility and methods support conserving water and creating recycled products. Using reverse ...

    16. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    17. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and ...

    18. Michael Levitt and Computational Biology

      Office of Scientific and Technical Information (OSTI)

      ... Additional Web Pages: 3 Scientists Win Chemistry Nobel for Complex Computer Modeling, npr Stanford's Nobel Chemistry Prize Honors Computer Science, San Jose Mercury News Without ...

    19. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
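
      As a hedged sketch (not taken from the patent), a scalar force profile of the kind described might ramp up as the user's locus of interaction approaches a boundary and then drop abruptly once the boundary is traversed. The shape, constants, and function name below are illustrative assumptions only.

      # Illustrative force-feedback profile near a boundary at position x = 0:
      # resistance grows as the probe approaches from the negative side and drops
      # once the boundary is traversed (constants are assumptions, not the patent's).

      def boundary_force(x: float, ramp_width: float = 0.05, k: float = 40.0) -> float:
          """Return a resisting force for probe position x (boundary at x = 0)."""
          if x < -ramp_width:
              return 0.0                    # far from the boundary: no feedback
          if x <= 0.0:
              return k * (x + ramp_width)   # force increases as the boundary nears
          return 0.0                        # boundary traversed: force drops away

      for pos in (-0.10, -0.04, -0.01, 0.001):
          print(f"x = {pos:+.3f}  force = {boundary_force(pos):.2f}")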

    20. HEATKAU Program.

      Energy Science and Technology Software Center (OSTI)

      2013-07-24

      Version 00. Calculation of the decay heat is of great importance for the design of the shielding of discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of coupled linear differential equations that describe decays and buildups of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast and easy-access, platform-independent binary format which is easy to visualize.
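
      The abstract describes numerically solving coupled linear differential equations for the decays and build-ups of fission-product nuclides and then summing their decay heat. As a hedged, generic sketch of that approach (not HEATKAU's model: the decay constants, chain structure, and energies below are invented placeholders, and HEATKAU itself is written in MATLAB), a small Bateman-type chain can be integrated with a fourth-order Runge-Kutta step:

      # Generic sketch of a Bateman-type system dN/dt = A @ N integrated with RK4,
      # then converted to decay heat as sum(lambda_i * E_i * N_i).  All nuclide
      # data here are placeholder assumptions, not values used by HEATKAU.
      import numpy as np

      lam = np.array([1e-1, 1e-2, 1e-3])        # decay constants (1/s), assumed
      E = np.array([0.5, 1.2, 0.8])             # energy per decay (MeV), assumed
      # Simple linear chain 0 -> 1 -> 2: losses on the diagonal, gains below it.
      A = np.array([[-lam[0],     0.0,     0.0],
                    [ lam[0], -lam[1],     0.0],
                    [    0.0,  lam[1], -lam[2]]])

      def rk4_step(N, dt):
          k1 = A @ N
          k2 = A @ (N + 0.5 * dt * k1)
          k3 = A @ (N + 0.5 * dt * k2)
          k4 = A @ (N + dt * k3)
          return N + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      N = np.array([1e20, 0.0, 0.0])            # inventory right after the burst
      dt, t_end, t = 0.1, 60.0, 0.0
      while t < t_end:
          N = rk4_step(N, dt)
          t += dt
      decay_heat_MeV_per_s = float(np.sum(lam * E * N))
      print(f"decay heat at t = {t_end:.0f} s: {decay_heat_MeV_per_s:.3e} MeV/s")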

    1. Machinist Pipeline/Apprentice Program Program Description

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost effective than previous time-based programs; moves apprentices to journeyworker status more quickly. Program Coordinator: Heidi Hahn. Email: hahn@lanl.gov. Phone number:...

    2. EECBG Financing Program Annual ...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Additional cost share required to administer the program Process Metrics-Underlying ... administering the Program and carrying out underlying activities supported by the Program. ...

    3. Existing Facilities Rebate Program

      Broader source: Energy.gov [DOE]

      The NYSERDA Existing Facilities program merges the former Peak Load Reduction and Enhanced Commercial and Industrial Performance programs. The new program offers a broad array of different...

    4. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national

    5. Information hiding in parallel programs

      SciTech Connect (OSTI)

      Foster, I.

      1992-01-30

      A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.
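
      As a hedged sketch of the idea (not code from the cited work), a reusable "template" such as a reduction can be written so that the problematic decision, here how chunk reductions are scheduled, is supplied separately when the template is composed into an application. The names, the chunk layout, and the thread-pool choice are assumptions made for this sketch.

      # Illustrative separation of concerns: the reduction "template" encodes only
      # the computation; the scheduling/mapping decision is injected by the caller.
      from concurrent.futures import ThreadPoolExecutor
      from functools import reduce

      def reduce_template(chunks, op, schedule):
          """Reduce each chunk, then combine the partial results; how the chunk
          reductions are scheduled (serially, thread pool, ...) is decided by
          the caller, not by the template."""
          partials = schedule(lambda c: reduce(op, c), chunks)
          return reduce(op, partials)

      serial = lambda f, xs: [f(x) for x in xs]      # one scheduling choice

      def pooled(f, xs):                             # another scheduling choice
          with ThreadPoolExecutor() as pool:
              return list(pool.map(f, xs))

      data = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
      assert reduce_template(data, lambda a, b: a + b, serial) == \
             reduce_template(data, lambda a, b: a + b, pooled)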

    6. A PVM Executive Program for Use with RELAP5-3D

      SciTech Connect (OSTI)

      Weaver, Walter Leslie; Tomlinson, E. T.; Aumiller, D. L.

      2002-04-01

      A PVM executive program has been developed for use with the RELAP5-3D computer program. The PVM executive allows RELAP5-3D to be coupled with any number of other computer programs to perform integrated analyses of nuclear power reactor systems and related experimental facilities. The executive program manages all phases of a coupled computation. It starts up and configures a virtual machine, spawns all of the coupled processes, coordinates the time step size between the coupled codes, manages the production of printed and plottable output, and shuts the virtual machine down at the end of the computation. The executive program also monitors the status of the coupled computation, repeating time steps as needed and terminating a coupled computation gracefully if one of the coupled processes is terminated by the computational node on which it is executing.
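
      As a hedged sketch of the coordination logic described above (not the PVM executive itself, with plain Python objects standing in for PVM message passing, and with the CoupledCode interface invented for this example), each coupled code proposes a time step and the coordinator advances everyone with the smallest proposal:

      # Illustrative coupling coordinator: not the actual PVM executive.
      class CoupledCode:
          """Stand-in for one coupled simulation process (assumed interface)."""
          def __init__(self, name, dt_hint):
              self.name, self.dt_hint, self.time = name, dt_hint, 0.0
          def propose_dt(self):
              return self.dt_hint          # each code proposes the step it can take
          def advance(self, dt):
              self.time += dt              # a real code would do physics here

      def run_coupled(codes, t_end):
          """Advance all coupled codes to t_end with a commonly agreed time step.
          (A real executive also repeats failed steps, manages output, and shuts
          the virtual machine down; none of that is modeled here.)"""
          t = 0.0
          while t_end - t > 1e-12:
              dt = min(min(c.propose_dt() for c in codes), t_end - t)
              for c in codes:
                  c.advance(dt)            # every code takes the same step
              t += dt
          return t

      run_coupled([CoupledCode("thermal", 0.05), CoupledCode("hydraulic", 0.02)], 1.0)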

    7. Programming Challenges Abstracts | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Abstracts and Biographies Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC)

    8. Programming Challenges Presentations | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Presentations Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community

    9. Better Buildings Neighborhood Program Business Models Guide: Program Administrator Description

      Broader source: Energy.gov [DOE]

      Better Buildings Neighborhood Program Business Models Guide: Program Administrator Business Models, Program Administrator Description.

    10. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
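
      As a greatly simplified, hedged sketch of the arithmetic in this scheme (no tree network, barriers, or wakeup units are modeled; the latencies and the pulse are just simulated values), each node's time base is initialized to its measured transmission latency from the root, so that when the root's pulse arrives all nodes agree on the same time origin:

      # Toy illustration of the time-base idea: latencies are plain numbers and
      # the pulse delivery is simulated, not sent over a real network.
      import random

      class Node:
          def __init__(self, name, latency_from_root):
              self.name = name
              self.latency = latency_from_root   # step 1: measured root-to-node latency
              self.time_base = None

          def on_pulse(self):
              # step 2: the pulse reaches this node 'latency' seconds after the
              # root sent it, so setting the local time base to that latency
              # makes every clock read the same value at the same instant.
              self.time_base = self.latency

      root = Node("root", 0.0)
      nodes = [root] + [Node(f"n{i}", random.uniform(1e-6, 5e-6)) for i in range(4)]
      for n in nodes:        # after the global barrier, the pulse reaches everyone
          n.on_pulse()
      print({n.name: n.time_base for n in nodes})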

    11. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    12. Vehicle Technologies Office Merit Review 2015: Integrated Computational Materials Engineering Approach to Development of Lightweight 3GAHSS Vehicle Assembly

      Broader source: Energy.gov [DOE]

      Presentation given by USAMP at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about integrated computational materials...

    13. Vehicle Technologies Office Merit Review 2014: Integrated Computational Materials Engineering Approach to Development of Lightweight 3GAHSS Vehicle Assembly

      Broader source: Energy.gov [DOE]

      Presentation given by USAMP at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about integrated computational materials...

    14. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

      DOE Patents [OSTI]

      Gschwind, Michael K

      2013-04-16

      Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

    15. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    16. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, storms, fires, power abnormalities, water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    17. HeNCE: A Heterogeneous Network Computing Environment

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; Manchek, Robert; Moore, Keith

      1994-01-01

      Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
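
      As a hedged sketch of the graph idea only (not HeNCE, with Python functions standing in for the Fortran or C subroutines and no heterogeneous machines involved), nodes can be declared together with their data dependencies and executed once all of their inputs are available:

      # Toy dependency-graph executor in the spirit described above: nodes are
      # functions, arcs are the data dependencies between them (illustration only).
      from graphlib import TopologicalSorter

      def load():           return [3, 1, 2]
      def sort_data(xs):    return sorted(xs)
      def total(xs):        return sum(xs)
      def report(s, t):     return f"sorted={s}, total={t}"

      graph = {                   # node -> (function, names of predecessor nodes)
          "load":   (load, []),
          "sort":   (sort_data, ["load"]),
          "total":  (total, ["load"]),
          "report": (report, ["sort", "total"]),
      }

      results = {}
      order = TopologicalSorter({k: set(d) for k, (_, d) in graph.items()}).static_order()
      for name in order:          # run each node after its predecessors
          func, deps = graph[name]
          results[name] = func(*(results[d] for d in deps))
      print(results["report"])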

    18. Human Reliability Program Overview

      SciTech Connect (OSTI)

      Bodin, Michael

      2012-09-25

      This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

    19. Vehicle Technologies Program Overview

      SciTech Connect (OSTI)

      none,

      2006-09-05

      Overview of the Vehicle Technologies Program including external assessment and market view; internal assessment, program history and progress; program justification and federal role; program vision, mission, approach, strategic goals, outputs, and outcomes; and performance goals.

    20. Method and apparatus for collaborative use of application program

      DOE Patents [OSTI]

      Dean, Craig D.

      1994-01-01

      Method and apparatus permitting the collaborative use of a computer application program simultaneously by multiple users at different stations. The method is useful with communication protocols having client/server control structures. The method of the invention requires only a sole executing copy of the application program and a sole executing copy of software comprising the invention. Users may collaboratively use a set of application programs by invoking for each desired application program one copy of software comprising the invention.