National Library of Energy BETA

Sample records for abaqus computer program

  1. Visualizing MCNP Tally Segment Geometry and Coupling Results with ABAQUS

    SciTech Connect

    J. R. Parry; J. A. Galbraith

    2007-11-01

    The Advanced Graphite Creep test, AGC-1, is planned for irradiation in the Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant program. The experiment requires very detailed neutronics and thermal hydraulics analyses to show compliance with programmatic and ATR safety requirements. The MCNP model used for the neutronics analysis required hundreds of tally regions to provide the desired detail. A method for visualizing the hundreds of tally region geometries and the tally region results in 3 dimensions has been created to support the AGC-1 irradiation. Additionally, a method was created which would allow ABAQUS to access the results directly for the thermal analysis of the AGC-1 experiment.

  2. INCITE Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Science at ALCF Allocation Programs INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early ...

  3. Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam

    SciTech Connect

    Siranosian, Antranik A.; Stevens, R. Robert

    2012-04-26

    This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data, from acceptance tests for 'Pad A' conducted at the Kansas City Plant (KCP), was used to calibrate the model. The data includes gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine one set of data, representing a qualitative mean, was chosen to calibrate the model. The data was converted from gap and load to nominal (engineering) strain and nominal stress in order to implement it in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
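
    The gap-to-strain and load-to-stress conversion described above is simple arithmetic once an initial pad thickness and loaded area are assumed. The sketch below illustrates only that step; the thickness, area, and data values are placeholders, not the M9747 pad properties or the KCP test data.

      import numpy as np

      def gap_load_to_nominal(gap_mm, load_N, thickness_mm, area_mm2):
          # Nominal (engineering) strain: relative displacement over initial thickness.
          strain = np.asarray(gap_mm, dtype=float) / thickness_mm
          # Nominal stress: load over initial loaded area (N/mm^2 = MPa).
          stress = np.asarray(load_N, dtype=float) / area_mm2
          return strain, stress

      # Illustrative values only.
      strain, stress = gap_load_to_nominal(gap_mm=[0.0, 0.5, 1.0, 1.5],
                                           load_N=[0.0, 40.0, 110.0, 260.0],
                                           thickness_mm=6.0, area_mm2=500.0)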

  4. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New and exciting updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, updated and enhanced radionuclide inventory and inclusion of the dose-conversion factors from FGR 11 and 12.
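
    As a point of reference for the kind of dose arithmetic such a code performs, the sketch below chains a dispersion factor, a release quantity, a breathing rate, and a dose conversion factor into an inhalation dose. It is a textbook estimate with placeholder numbers, not the RSAC-6 implementation.

      def inhalation_dose_sv(chi_over_q, release_bq, breathing_rate, dcf_sv_per_bq):
          # chi_over_q: time-integrated dispersion factor at the receptor (s/m^3)
          # release_bq: activity released (Bq); breathing_rate in m^3/s
          # dcf_sv_per_bq: committed dose per unit intake (Sv/Bq)
          intake_bq = chi_over_q * release_bq * breathing_rate
          return intake_bq * dcf_sv_per_bq

      # Illustrative numbers only.
      print(inhalation_dose_sv(chi_over_q=1.0e-5, release_bq=3.7e10,
                               breathing_rate=3.3e-4, dcf_sv_per_bq=2.9e-9))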

  5. INCITE Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    INCITE Program Innovative and Novel Computational Impact on Theory and Experiment (INCITE) Program The INCITE program provides allocations to computationally intensive, large-scale research projects that aim to address "grand challenges" in science and engineering. The program conducts a two-part review of all proposals: a peer review by an international panel of experts and a computational-readiness review. The annual call for proposals is issued in April and the allocations are

  6. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
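
    A minimal sketch of the grouping idea in the abstract, assuming each thread has already been reduced to a list of calling-instruction addresses: threads with identical address lists fall into the same group, so a defective thread shows up as a small outlier group. This is an illustration, not the patented implementation.

      from collections import defaultdict

      def group_threads(call_addresses):
          # call_addresses: dict mapping thread id -> list of call-site addresses
          groups = defaultdict(list)
          for tid, addrs in call_addresses.items():
              groups[tuple(addrs)].append(tid)   # identical address lists share one group
          return groups

      threads = {0: [0x400a10, 0x400b20], 1: [0x400a10, 0x400b20], 2: [0x400a10, 0x400c00]}
      for addrs, tids in group_threads(threads).items():
          print([hex(a) for a in addrs], "->", tids)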

  7. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  8. Enhancing the ABAQUS thermomechanics code to simulate multipellet steady and transient LWR fuel rod behavior

    SciTech Connect

    R. L. Williamson

    2011-08-01

    A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully-coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  9. Enhancing the ABAQUS Thermomechanics Code to Simulate Steady and Transient Fuel Rod Behavior

    SciTech Connect

    R. L. Williamson; D. A. Knoll

    2009-09-01

    A powerful multidimensional fuels performance capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. The various modeling capabilities are demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multi-pellet fuel rod, during both steady and transient operation. Computational results demonstrate the importance of a multidimensional fully-coupled thermomechanics treatment. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermo-mechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  10. Programs | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Early Science Program INCITE Program ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2016-2017 Projects ADSP projects Theta ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Featured Science Turbulent flow going through a multi-hole coupon geometry Large Eddy Simulations of Combustor Liner Flows Anne Dord Allocation Program: INCITE Allocation Hours: 100 Million Addressing Challenges As a DOE Office of

  11. Director's Discretionary (DD) Program | Argonne Leadership Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Facility INCITE Program ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2016-2017 Projects ADSP projects Theta ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Director's Discretionary (DD) Program The ALCF's DD program provides "start up" awards to researchers working toward an INCITE or ALCC allocation to help them achieve computational readiness. Projects must demonstrate a need for

  12. Advanced Simulation and Computing Program

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    The SSP mission is to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality. ASC works in partnership with computer ...

  13. ALCC Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    INCITE 2016 Projects ALCC 2016-2017 Projects ADSP projects Theta ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations ALCC Program ASCR Leadership Computing Challenge (ALCC) Program The ALCC program allocates resources to projects with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission and for broadening the community of researchers capable of using leadership computing resources. The DOE conducts a peer review of all

  14. ADP computer security classification program

    SciTech Connect

    Augustson, S.J.

    1984-01-01

    CG-ADP-1, the Automatic Data Processing Security Classification Guide, provides for classification guidance (for security information) concerning the protection of Department of Energy (DOE) and DOE contractor Automatic Data Processing (ADP) systems which handle classified information. Within the DOE, ADP facilities that process classified information provide potentially lucrative targets for compromise. In conjunction with the security measures required by DOE regulations, necessary precautions must be taken to protect details of those ADP security measures which could aid in their own subversion. Accordingly, the basic principle underlying ADP security classification policy is to protect information which could be of significant assistance in gaining unauthorized access to classified information being processed at an ADP facility. Given this policy, classification topics and guidelines are approved for implementation. The basic program guide, CG-ADP-1 is broad in scope and based upon it, more detailed local guides are sometimes developed and approved for specific sites. Classification topics are provided for system features, system and security management, and passwords. Site-specific topics can be addressed in local guides if needed.

  15. 2014 call for NERSC's Data Intensive Computing Pilot Program...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    NERSC's Data Intensive Computing Pilot Program 2014 call for NERSC's Data Intensive Computing Pilot Program Due December 10 November 18, 2013 by Francesca Verdier (0 Comments)...

  16. Finite Volume Based Computer Program for Ground Source Heat Pump...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Finite Volume Based Computer Program for Ground Source Heat Pump Systems Finite Volume Based Computer Program for Ground Source Heat Pump Systems Project objective: Create a new ...

  17. ORISE Resources: Equal Access Initiative Computer Grants Program

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Equal Access Initiative Computer Grants Program The Equal Access Initiative Computer Grants Program is sponsored by the National Minority AIDS Council (NMAC) and the National...

  18. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Application of the...

  19. The Computational Physics Program of the national MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  20. Refurbishment program of HANARO control computer system

    SciTech Connect

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S.

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts supply could have caused great problems. The first consideration for a replacement of the control computer dates back to 2007. The supplier did not produce the components of MLC so that this system would no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system in consideration of suitable interfaces, compliance with no special outage for installation and commissioning, and no change of the well-proved operation philosophy. HCCS is a DCS (Discrete Control System) using PLC manufactured by RTP. To enhance the reliability, we adapt a triple processor system, double I/O system and hot swapping function. This paper describes the refurbishment program of the HANARO control system including the design requirements of HCCS. (authors)

  1. Computer System, Cluster, and Networking Summer Institute Program Description

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    System, Cluster, and Networking Summer Institute Program Description The Computer System, Cluster, and Networking Summer Institute (CSCNSI) is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. The program emphasizes practical skill development in setting up, configuring, administering, testing, monitoring, and scheduling computer systems, supercomputer clusters, and computer

  2. The computational physics program of the National MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.

  3. The FALSTF last-flight computer program

    SciTech Connect

    Childs, R.L.

    1996-01-01

    FALSTF is a computer program used with the DORT transport code to calculate fluxes and doses at detector points located outside the DORT geometry model. An integral form of the transport equation is solved to obtain the flux at the detector points resulting from the uncollided transport of the emergent particle density within the geometry as calculated by DORT. Both R-Z and R-Θ geometries are supported.
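
    For orientation, the sketch below evaluates the textbook uncollided (last-flight) point kernel for an isotropic point source in a uniform medium. It only illustrates the kind of estimate FALSTF makes; the program itself integrates the emergent particle density over the DORT mesh, which is not reproduced here, and all numbers are placeholders.

      import math

      def uncollided_flux(source_strength, sigma_t, distance_cm):
          # Flux (particles/cm^2/s) at a detector distance_cm from an isotropic
          # point source of source_strength (particles/s) in a medium with total
          # macroscopic cross section sigma_t (1/cm).
          return source_strength * math.exp(-sigma_t * distance_cm) / (4.0 * math.pi * distance_cm**2)

      print(uncollided_flux(source_strength=1.0e10, sigma_t=0.1, distance_cm=50.0))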

  4. The FALSTF last-flight computer program

    SciTech Connect

    Childs, R.L.

    1996-04-01

    FALSTF is a computer program used with the DORT transport code to calculate fluxes and doses at detector points located outside the DORT geometry model. An integral form of the transport equation is solved to obtain the flux at the detector points resulting from the uncollided transport of the emergent particle density within the geometry as calculated by DORT. Both R-Z and R-θ geometries are supported.

  5. Computer System, Cluster, and Networking Summer Institute Program...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. ...

  6. ALCF Data Science Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ALCF Data Science Program The ALCF Data Science Program (ADSP) is targeted at "big data" science problems that require the scale and performance of leadership computing resources. ...

  7. Calibrating the Abaqus Crushable Foam Material Model using UNM Data

    SciTech Connect

    Schembri, Philip E.; Lewis, Matthew W.

    2014-02-27

    Triaxial test data from the University of New Mexico and uniaxial test data from W-14 is used to calibrate the Abaqus crushable foam material model to represent the syntactic foam comprised of APO-BMI matrix and carbon microballoons used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and not-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
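
    The "fit a line to the elastic region" step mentioned above amounts to a linear least-squares fit over the low-strain portion of each test curve. The sketch below shows that step with made-up data and an assumed elastic-strain cutoff; it is not the calibration from the report.

      import numpy as np

      def fit_elastic_modulus(strain, stress, elastic_strain_limit=0.01):
          strain = np.asarray(strain, dtype=float)
          stress = np.asarray(stress, dtype=float)
          mask = strain <= elastic_strain_limit          # assumed elastic region
          slope, _ = np.polyfit(strain[mask], stress[mask], 1)
          return slope                                   # estimated elastic modulus

      E = fit_elastic_modulus(strain=[0.0, 0.002, 0.005, 0.01, 0.02, 0.05],
                              stress=[0.0, 0.4, 1.0, 2.0, 2.6, 3.1])   # MPa, illustrative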

  8. Intro to computer programming, no computer required! | Argonne...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

  9. Early Science Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Aurora ESP Call for Proposals Aurora ESP Proposal Instructions INCITE Program ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2016-2017 Projects ADSP projects Theta ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Early Science Program Thanks to all Who Submitted Proposals for the Aurora Early Science Program! Call for Proposals for the Aurora Early Science Program is now closed The ALCF will be

  10. Argonne Training Program on Extreme-Scale Computing Scheduled...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    This program provides intensive hands-on training on the key skills, approaches and tools to design, implement, and execute computational science and engineering applications on ...

  11. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
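
    The relationship stated in the abstract reduces to a sum of three time-period-specific terms, as in the sketch below; the function name and input values are hypothetical.

      def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
          # Future facility conditions = time period specific maintenance cost
          # + modernization factor + backlog factor (per the abstract).
          return maintenance_cost + modernization_factor + backlog_factor

      print(future_facility_conditions(1.2e6, 3.5e5, 8.0e5))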

  12. A Computer Program For Speciation Calculation.

    Energy Science and Technology Software Center

    1990-11-21

    Version: 00 WHATIF-AQ is part of a family of programs for calculations of geochemistry in the near-field of radioactive waste with temperature gradients.

  13. Parallel Programming with MPI | Argonne Leadership Computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Balaji, MCS Rajeev Thakur, MCS Ken Raffenetti, MCS Halim Amer, MCS Event Website: https://www.mcs.anl.gov/~raffenet/permalinks/argonne16mpi.php The Mathematics and Computer ...

  14. Seventy Years of Computing in the Nuclear Weapons Program

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Seventy Years of Computing in the Nuclear Weapons Program Seventy Years of Computing in the Nuclear Weapons Program WHEN: Jan 13, 2015 7:30 PM - 8:00 PM WHERE: Fuller Lodge Central Avenue, Los Alamos, NM, USA SPEAKER: Bill Archer of the Weapons Physics (ADX) Directorate CONTACT: Bill Archer 505 665 7235 CATEGORY: Science INTERNAL: Calendar Login Event Description Rich history of computing in the Laboratory's weapons program. The talk is free and open to the public and is part of the 2014-15 Los

  15. UFO (UnFold Operator) computer program abstract

    SciTech Connect

    Kissel, L.; Biggs, F.

    1982-11-01

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  16. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop...

  17. Computer programs for multilocus haplotyping of general pedigrees

    SciTech Connect

    Weeks, D.E.; O'Connell, J.R.; Sobel, E.

    1995-06-01

    We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.

  18. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  19. A computer program for HVDC converter station RF noise calculations

    SciTech Connect

    Kasten, D.G.; Caldecott, R.; Sebo, S.A. (Dept. of Electrical Engineering); Liu, Y. (Bradley Dept. of Electrical Engineering)

    1994-04-01

    HVDC converter station operations generate radio frequency (RF) electromagnetic (EM) noise which could interfere with adjacent communication and computer equipment, and carrier system operations. A generic Radio Frequency Computer Analysis Program (RAFCAP) for calculating the EM noise generated by valve ignition of a converter station has been developed as part of a larger project. The program calculates RF voltages, currents, complex power, ground level electric field strength and magnetic flux density in and around an HVDC converter station. The program requires the converter station network to be represented by frequency dependent impedance functions. Comparisons of calculated and measured values are given for an actual HVDC station to illustrate the validity of the program. RAFCAP is designed to be used by engineers for the purpose of calculating the RF noise produced by the igniting of HVDC converter valves.

  20. Mira Early Science Program | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Mira Early Science Program The goals of the ALCF-2 Early Science Program (ESP) were to prepare key applications for the architecture and scale of Mira, and to solidify libraries and infrastructure that would pave the way for other future production applications. The 16 Early Science projects are the result of a call for proposals, and were chosen based on computational and scientific reviews. The projects, in addition to promising delivery of exciting new science, are all based on

  1. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  2. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  3. Computer programs for eddy-current defect studies

    SciTech Connect

    Pate, J. R.; Dodd, C. V.

    1990-06-01

    Several computer programs to aid in the design of eddy-current tests and probes have been written. The programs, written in Fortran, deal in various ways with the response to defects exhibited by four types of probes: the pancake probe, the reflection probe, the circumferential boreside probe, and the circumferential encircling probe. Programs are included which calculate the impedance or voltage change in a coil due to a defect, which calculate and plot the defect sensitivity factor of a coil, and which invert calculated or experimental readings to obtain the size of a defect. The theory upon which the programs are based is the Burrows point defect theory, and thus the calculations of the programs will be more accurate for small defects. 6 refs., 21 figs.

  4. Application and implementation of transient algorithms in computer programs

    SciTech Connect

    Benson, D.J.

    1985-07-01

    This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.

  5. Multiple-comparison computer program using the bonferroni t statistic

    SciTech Connect

    Johnson, E. E.

    1980-11-13

    To ascertain the agreement among laboratories, samples from a single batch of material are analyzed by the different laboratories and results are then compared. A graphical format was designed for presenting the results and for showing which laboratories have significantly different results. The appropriate statistic for simultaneously testing the significance of the differences between several means is Bonferroni t. A computer program was written to make the tests between means based on Bonferroni t and also to make multiple comparisons of the standard deviations associated with the means. The program plots the results and indicates means and standard deviations which are significantly different.
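
    A minimal sketch of Bonferroni-corrected pairwise comparisons using SciPy: the per-pair significance threshold is the overall alpha divided by the number of pairs. The original program's exact statistic, its comparisons of standard deviations, and its plotting are not reproduced, and the sample data are made up.

      from itertools import combinations
      from scipy import stats

      def bonferroni_pairwise(samples, alpha=0.05):
          # samples: dict mapping laboratory name -> list of measurements
          pairs = list(combinations(samples, 2))
          adjusted_alpha = alpha / len(pairs)            # Bonferroni adjustment
          results = {}
          for a, b in pairs:
              t_stat, p_value = stats.ttest_ind(samples[a], samples[b])
              results[(a, b)] = (p_value, p_value < adjusted_alpha)   # True => significantly different
          return results

      labs = {"A": [10.1, 10.3, 9.9], "B": [10.2, 10.4, 10.1], "C": [11.0, 11.2, 10.9]}
      print(bonferroni_pairwise(labs))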

  6. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  7. PET computer programs for use with the 88-inch cyclotron

    SciTech Connect

    Gough, R.A.; Chlosta, L.

    1981-06-01

    This report describes in detail several offline programs written for the PET computer which provide an efficient data management system to assist with the operation of the 88-Inch Cyclotron. This function includes the capability to predict settings for all cyclotron and beam line parameters for all beams within the present operating domain of the facility. The establishment of a data base for operational records is also described from which various aspects of the operating history can be projected.

  8. About the ASCR Computer Science Program | U.S. DOE Office of Science (SC)

    Office of Science (SC)

    About the ASCR Computer Science Program Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee

  9. final report for Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Johnson, Ralph E

    2013-04-10

    This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing

  10. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  11. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  12. Viscosity index calculated by program in GW-basic for personal computers

    SciTech Connect

    Anaya, C.; Bermudez, O.

    1988-12-26

    A computer program has been developed to calculate the viscosity index of oils when viscosities at two temperatures are known.
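
    For oils with a viscosity index at or below 100, the index follows the familiar relation VI = 100 (L - U) / (L - H), where U is the oil's kinematic viscosity at 40 °C and L and H are reference-oil viscosities normally read from the ASTM D2270 tables. The sketch below applies that relation with illustrative numbers; it is not the GW-BASIC program itself, and the table lookup is assumed to be done elsewhere.

      def viscosity_index(u_40c, l_ref, h_ref):
          # u_40c: kinematic viscosity of the test oil at 40 C (cSt)
          # l_ref, h_ref: 40 C viscosities of the VI=0 and VI=100 reference oils
          # having the same 100 C viscosity as the test oil (from ASTM tables).
          return 100.0 * (l_ref - u_40c) / (l_ref - h_ref)

      print(viscosity_index(u_40c=73.3, l_ref=119.9, h_ref=69.5))   # illustrative values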

  13. Computer Science Program | U.S. DOE Office of Science (SC)

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computer Science Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community

  14. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    SciTech Connect

    William M. Tang

    2011-02-09

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  15. Seventy Years of Computing in the Nuclear Weapons Program

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    status of computing and expectations for the near future. Archer earned his doctorate from the University of Oklahoma for research done at LANL on computational quantum chemistry. ...

  16. Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computing and Storage Requirements Computing and Storage Requirements for FES J. Candy General Atomics, San Diego, CA Presented at DOE Technical Program Review Hilton Washington DC/Rockville Rockville, MD 19-20 March 2013 2 Computing and Storage Requirements Drift waves and tokamak plasma turbulence Role in the context of fusion research * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent

  17. 2014 call for NERSC's Data Intensive Computing Pilot Program Due December

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    10 NERSC's Data Intensive Computing Pilot Program 2014 call for NERSC's Data Intensive Computing Pilot Program Due December 10 November 18, 2013 by Francesca Verdier NERSC's Data Intensive Computing Pilot Program is now open for its second round of allocations to projects in data intensive science. This pilot aims to support and enable scientists to tackle their most demanding data intensive challenges. Selected projects will be piloting new methods and technologies targeting data

  18. DOE Announces $3.8 Million for High Performance Computing Program | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Energy DOE Announces $3.8 Million for High Performance Computing Program DOE Announces $3.8 Million for High Performance Computing Program September 1, 2016 - 3:00pm AMO Partners Select Thirteen Projects for the High Performance Computing for Manufacturing The Energy Department this week, in partnership with Lawrence Livermore National Laboratory (LLNL), announced $3.8 million to be allocated across 13 projects to use high-performance computing resources at the Department's national laboratories to

  19. An Information Dependant Computer Program for Engine Exhaust Heat Recovery for Heating

    Energy.gov [DOE]

    A computer program was developed to help engineers at rural Alaskan village power plants to quickly evaluate how to use exhaust waste heat from individual diesel power plants.

  20. Princeton graduate student Imène Goumiri creates computer program...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    computer program that helps stabilize fusion plasmas By John Greenwald and Raphael ... a method for limiting instabilities that reduce the performance of fusion plasmas. ...

  1. Advanced Simulation and Computing and Institutional R&D Programs...

    National Nuclear Security Administration (NNSA)

    The ASC Program continually works to meet national needs-economically, efficiently, and within the scope set by Congress-to assure those who provide the resources that their funds ...

  2. The ENERGY-10 design-tool computer program

    SciTech Connect

    Balcomb, J.D.; Crowder, R.S. III.

    1995-11-01

    ENERGY-10 is a PC-based building energy simulation program for smaller commercial and institutional buildings that is specifically designed to evaluate energy-efficient features in the very early stages of the architectural design process. Developed specifically as a design tool, the program makes it easy to evaluate the integration of daylighting, passive solar design, low-energy cooling, and energy-efficient equipment into high-performance buildings. The simulation engines perform whole-building energy analysis for 8760 hours per year including both daylighting and dynamic thermal calculations. The primary target audience for the program is building designers, especially architects, but also includes HVAC engineers, utility officials, and architecture and engineering students and professors.

  3. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  4. Finite Volume Based Computer Program for Ground Source Heat Pump Systems |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy Finite Volume Based Computer Program for Ground Source Heat Pump Systems Finite Volume Based Computer Program for Ground Source Heat Pump Systems Project objective: Create a new modeling decision tool that will enable ground source heat pump (GSHP) designers and customers to make better design and purchasing decisions. gshp_menart_finite_volume_based.pdf (270.9 KB) More Documents & Publications Integration of Noise and Coda Correlation Data into Kinematic and

  5. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  6. Eight Projects Selected for NERSC's Data Intensive Computing Pilot Program

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

  7. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  8. High performance computing and communications grand challenges program

    SciTech Connect

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Base (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus, and streptococcal protein G, are known to bind to IgG, and both have an α + β sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  9. A computer program to determine the specific power of prismatic-core reactors

    SciTech Connect

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  10. DOE High Performance Computing for Manufacturing Program Seeks to Fund New

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Proposals to Advance Energy Technologies | Department of Energy Program Seeks to Fund New Proposals to Advance Energy Technologies DOE High Performance Computing for Manufacturing Program Seeks to Fund New Proposals to Advance Energy Technologies September 12, 2016 - 4:46pm Addthis News release from DOE's Advanced Manufacturing Office, September 12, 2016. The Energy Department's Advanced Manufacturing Office today announced up to $3 million in available funding for manufacturers to use

  11. Example Program and Makefile for BG/Q | Argonne Leadership Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Facility Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Example Program and Makefile for BG/Q

  12. DITTY - a computer program for calculating population dose integrated over ten thousand years

    SciTech Connect

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.
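
    The quantity DITTY reports is a time integral of collective dose over a ten-thousand-year window. The sketch below shows such an integral evaluated with the trapezoidal rule over a made-up dose-rate history; the pathway and release models that would produce that history are not represented.

      import numpy as np

      def integrated_collective_dose(times_yr, dose_rate_person_sv_per_yr):
          # Trapezoidal time integral of collective dose rate (person-Sv).
          return np.trapz(dose_rate_person_sv_per_yr, times_yr)

      t = np.linspace(0.0, 10000.0, 1001)            # years
      rate = 5.0e-3 * np.exp(-t / 3000.0)            # illustrative decaying dose-rate history
      print(integrated_collective_dose(t, rate))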

  13. Methods, systems, and computer program products for network firewall policy optimization

    DOEpatents

    Fulp, Errin W.; Tarsa, Stephen J.

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
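
    A small sketch of the reordering idea, assuming a user-supplied test for whether two rules can match the same packet: a rule is only promoted past rules it does not intersect, and among eligible rules the one with the highest match probability is emitted first. This illustrates the concept, not the patented method.

      def reorder_rules(rules, intersects):
          # rules: list of (name, match_probability) in original policy order.
          # intersects(a, b): True if rules named a and b can match the same packet.
          remaining = list(range(len(rules)))
          ordered = []
          while remaining:
              # Eligible rules have no earlier, still-unplaced rule they intersect.
              eligible = [i for i in remaining
                          if not any(j < i and intersects(rules[j][0], rules[i][0])
                                     for j in remaining)]
              best = max(eligible, key=lambda i: rules[i][1])
              ordered.append(rules[best])
              remaining.remove(best)
          return ordered

      rules = [("allow-dns", 0.05), ("allow-web", 0.70), ("deny-all", 0.25)]
      print(reorder_rules(rules, intersects=lambda a, b: "deny-all" in (a, b)))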

  14. Computing single step operators of logic programming in radial basis function neural networks

    SciTech Connect

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function T_P: I → I. Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
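
    A propositional sketch of the single-step (immediate consequence) operator: an atom is derived when the body of some clause is already true in the interpretation, and iterating the operator reaches its fixed point. This only illustrates the operator being encoded; the radial basis function network and the training-set generation are not shown, and the example program is made up.

      def t_p(clauses, interpretation):
          # clauses: list of (head_atom, [body_atoms]); interpretation: set of true atoms.
          return {head for head, body in clauses if all(b in interpretation for b in body)}

      program = [("a", []), ("b", ["a"]), ("c", ["a", "b"])]
      I = set()
      while True:                         # iterate to the fixed point of T_P
          new = t_p(program, I)
          if new == I:
              break
          I = new
      print(I)   # {'a', 'b', 'c'}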

  15. Recovery Act: Finite Volume Based Computer Program for Ground Source Heat Pump Systems

    SciTech Connect

    James A Menart, Professor

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled Finite Volume Based Computer Program for Ground Source Heat Pump Systems. The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The

  16. Finite Volume Based Computer Program for Ground Source Heat Pump System

    SciTech Connect

    Menart, James A.

    2013-02-22

    This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled "Finite Volume Based Computer Program for Ground Source Heat Pump Systems." The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly, COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump.

  17. SNOW: a digital computer program for the simulation of ion beam devices

    SciTech Connect

    Boers, J.E.

    1980-08-01

    A digital computer program, SNOW, has been developed for the simulation of dense ion beams. The program simulates the plasma expansion cup (but not the plasma source itself), the acceleration region, and a drift space with neutralization if desired. The ion beam is simulated by computing representative trajectories through the device. The potentials are simulated on a large rectangular matrix array which is solved by iterative techniques. Poisson's equation is solved at each point within the configuration using space-charge densities computed from the ion trajectories combined with background electron and/or ion distributions. The simulation methods are described in some detail along with examples of both axially-symmetric and rectangular beams. A detailed description of the input data is presented.
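
    The potential solution described here, Poisson's equation relaxed iteratively on a rectangular matrix with space-charge source terms, can be sketched in a few lines. The following is a generic Jacobi relaxation, not the SNOW implementation, and the grid dimensions, electrode potentials, and charge density are assumptions.

    import numpy as np

    eps0 = 8.854e-12
    nx, ny, h = 60, 40, 1e-3               # 60 x 40 nodes, 1 mm spacing (assumed)
    phi = np.zeros((nx, ny))
    phi[0, :] = 0.0                        # grounded electrode
    phi[-1, :] = -20e3                     # extraction electrode at -20 kV (assumed)
    rho = np.full((nx, ny), 1e-6)          # C/m^3, stand-in beam space charge

    for it in range(20000):
        new = phi.copy()
        # five-point Jacobi update of interior nodes with the source term
        new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h * h * rho[1:-1, 1:-1] / eps0)
        if np.max(np.abs(new - phi)) < 1.0:    # stop when updates fall below 1 V
            phi = new
            break
        phi = new

    print("iterations used:", it + 1)
    print("potential at grid centre: %.0f V" % phi[nx // 2, ny // 2])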

  18. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual. Revision 1

    SciTech Connect

    Wenzel, D.R.

    1994-02-01

    The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.
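
    For readers unfamiliar with decay-and-ingrowth bookkeeping, the fragment below shows the idea for the simplest case, a two-member parent/daughter chain solved with the Bateman equations. It is only a sketch of the concept, not the RSAC-5 implementation, and the half-lives, inventory, and transport time are assumptions.

    import math

    def decay_chain(n_parent0, t_half_parent, t_half_daughter, t):
        """Atoms of parent and daughter at time t for an initially pure
        parent inventory (two-member Bateman solution)."""
        lp = math.log(2.0) / t_half_parent
        ld = math.log(2.0) / t_half_daughter
        parent = n_parent0 * math.exp(-lp * t)
        daughter = (n_parent0 * lp / (ld - lp)
                    * (math.exp(-lp * t) - math.exp(-ld * t)))
        return parent, daughter

    if __name__ == "__main__":
        # assumed 8 h parent, 30 min daughter, 2 h transport delay
        p, d = decay_chain(1.0e20, 8.0 * 3600, 0.5 * 3600, 2.0 * 3600)
        print("parent atoms remaining: %.3e" % p)
        print("daughter atoms grown in: %.3e" % d)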

  19. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  20. Princeton graduate student Imène Goumiri creates computer program that

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab. By John Greenwald and Raphael Rosen, April 14, 2016. Imène Goumiri, a Princeton University graduate student, has worked with physicists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) to simulate a method for limiting instabilities that reduce the

  1. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    SciTech Connect

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for buying and selling of electricity from electric utilities to qualifying facilities (QF) as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production, which can be either gross production or net production; net production accounts for internal electricity usage by the QF. The program allows for adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.

  2. An expert computer program for classifying stars on the MK spectral classification system

    SciTech Connect

    Gray, R. O.; Corbally, C. J.

    2014-04-01

    This paper describes an expert computer program (MKCLASS) designed to classify stellar spectra on the MK Spectral Classification system in a way similar to humans—by direct comparison with the MK classification standards. Like an expert human classifier, the program first comes up with a rough spectral type, and then refines that spectral type by direct comparison with MK standards drawn from a standards library. A number of spectral peculiarities, including barium stars, Ap and Am stars, λ Bootis stars, carbon-rich giants, etc., can be detected and classified by the program. The program also evaluates the quality of the delivered spectral type. The program currently is capable of classifying spectra in the violet-green region in either the rectified or flux-calibrated format, although the accuracy of the flux calibration is not important. We report on tests of MKCLASS on spectra classified by human classifiers; those tests suggest that over the entire HR diagram, MKCLASS will classify in the temperature dimension with a precision of 0.6 spectral subclass, and in the luminosity dimension with a precision of about one half of a luminosity class. These results compare well with human classifiers.
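
    The comparison-with-standards idea is simple enough to sketch: score a target spectrum against each library standard and keep the best match. The toy example below uses a mean-squared residual over made-up rectified flux arrays; it is only an illustration of the general approach, not the MKCLASS algorithm or its classification criteria.

    import numpy as np

    standards = {                       # spectral type -> rectified flux array
        "A0 V": np.array([0.95, 0.80, 0.60, 0.90, 0.97]),
        "F5 V": np.array([0.90, 0.70, 0.72, 0.85, 0.93]),
        "G2 V": np.array([0.85, 0.65, 0.80, 0.80, 0.90]),
    }

    def classify(spectrum):
        """Return the standard type minimizing the mean squared residual."""
        scores = {t: float(np.mean((spectrum - s) ** 2))
                  for t, s in standards.items()}
        best = min(scores, key=scores.get)
        return best, scores[best]

    if __name__ == "__main__":
        target = np.array([0.86, 0.66, 0.79, 0.81, 0.91])
        print(classify(target))         # expected: closest to the G2 V standard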

  3. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element data base suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  4. Princeton graduate student Imène Goumiri creates computer program that

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    helps stabilize fusion plasmas | Princeton Plasma Physics Lab. By John Greenwald and Raphael Rosen, April 14, 2016. Imène Goumiri led the design of a controller. (Photo by Elle Starkman/Office of Communications) Imène Goumiri, a Princeton University graduate student, has worked with physicists at

  5. DUPLEX: A molecular mechanics program in torsion angle space for computing structures of DNA and RNA

    SciTech Connect

    Hingerty, B.E.

    1992-07-01

    DUPLEX produces energy-minimized structures of DNA and RNA of any base sequence for single and double strands. The smallest subunits are deoxydinucleoside monophosphates, and up to 12 residues, single or double stranded, can be treated. In addition, it can incorporate NMR-derived interproton distances as constraints in the minimizations. Both upper and lower bounds for these distances can be specified. The program has been designed to run on a UNICOS Cray supercomputer, but should run, albeit slowly, on a laboratory computer such as a VAX or a workstation.

  6. OPPDIF: A Fortran program for computing opposed-flow diffusion flames

    SciTech Connect

    Lutz, A.E.; Kee, R.J.; Grcar, J.F.; Rupley, F.M.

    1997-05-01

    OPPDIF is a Fortran program that computes the diffusion flame between two opposing nozzles. A similarity transformation reduces the two-dimensional axisymmetric flow field to a one-dimensional problem. Assuming that the radial component of velocity is linear in radius, the dependent variables become functions of the axial direction only. OPPDIF solves for the temperature, species mass fractions, axial and radial velocity components, and radial pressure gradient, which is an eigenvalue in the problem. The TWOPNT software solves the two-point boundary value problem for the steady-state form of the discretized equations. The CHEMKIN package evaluates chemical reaction rates and thermodynamic and transport properties.

  7. Method, systems, and computer program products for implementing function-parallel network firewall

    DOEpatents

    Fulp, Errin W.; Farley, Ryan J.

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
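
    A toy model of the function-parallel idea: each node holds only a slice of the ordered rule set and reports the index of its first matching rule, and the lowest index across nodes decides the packet, which reproduces the first-match semantics of the full policy. The rule representation and match predicates below are illustrative assumptions, not the patented design.

    def first_match(rules, packet):
        """Return (global_index, action) for the first local rule that matches."""
        for idx, (pred, action) in rules:
            if pred(packet):
                return idx, action
        return None

    def function_parallel_filter(portions, packet, default="deny"):
        """Combine per-node answers: the earliest matching rule wins."""
        hits = [h for h in (first_match(p, packet) for p in portions) if h]
        return min(hits)[1] if hits else default

    if __name__ == "__main__":
        rule_set = [
            (lambda p: p["port"] == 22,  "deny"),
            (lambda p: p["port"] == 80,  "accept"),
            (lambda p: p["port"] == 443, "accept"),
            (lambda p: True,             "deny"),       # default rule
        ]
        indexed = list(enumerate(rule_set))
        node_a, node_b = indexed[:2], indexed[2:]        # two-node partition
        print(function_parallel_filter([node_a, node_b], {"port": 80}))   # accept
        print(function_parallel_filter([node_a, node_b], {"port": 22}))   # deny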

  8. Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code

    SciTech Connect

    Carbaugh, Eugene H.; Bihl, Donald E.

    2008-01-07

    The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.

  9. REFLECT: A computer program for the x-ray reflectivity of bent perfect crystals

    SciTech Connect

    Etelaeniemi, V.; Suortti, P.; Thomlinson, W. (Dept. of Physics; Brookhaven National Lab., Upton, NY)

    1989-09-01

    The design of monochromators for x-ray applications, using either standard laboratory sources or synchrotron radiation sources, requires a knowledge of the reflectivity of the crystals. The reflectivity depends on the crystals used, the geometry of the reflection, the energy range of the radiation, and, in the present case, the cylindrical bending radius of the optical device. This report is intended to allow the reader to become familiar with, and therefore use, a computer program called REFLECT which we have used in the design of a dual beam Laue monochromator for synchrotron angiography. The results of REFLECT have been compared to measured reflectivities for both bent Bragg and Laue geometries. The results are excellent and should give full confidence in the use of the program. 6 refs.

  10. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect

    Nielson, K. K.; Sanders, R. W.

    1982-04-01

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bi-chromatic excitation as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards, and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches which give flexibility in performing the calculations best suited to the sample and the user needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one-to-two minutes using a PDP-11/34 computer operating under RSX-11M.

  11. Audit of Selected Aspects of the Unclassified Computer Security Program at a DOE Headquarters Computing Facility, AP-B-95-02

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    OFFICE OF INSPECTOR GENERAL AUDIT OF SELECTED ASPECTS OF THE UNCLASSIFIED COMPUTER SECURITY PROGRAM AT A DOE HEADQUARTERS COMPUTING FACILITY The Office of Inspector General wants to make the distribution of its reports as customer friendly and cost effective as possible. Therefore, this report will be available electronically through the Internet five to seven days after publication at the alternative addresses: Department of Energy Headquarters Gopher gopher.hr.doe.gov Department of Energy

  12. Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computing. Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Since 1978 Los Alamos has won 137 of the prestigious R&D 100 Awards. Los Alamos honored for industry collaboration in 2016 HPCwire Awards. Los Alamos National Laboratory has been recognized for the Lab's collaboration with

  13. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  14. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    SciTech Connect

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.

  15. SALE: a simplified ALE computer program for fluid flow at all speeds

    SciTech Connect

    Amsden, A.A.; Ruppel, H.M.; Hirt, C.W.

    1980-06-01

    A simplified numerical fluid-dynamics computing technique is presented for calculating two-dimensional fluid flows at all speeds. It combines an implicit treatment of the pressure equation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique with the grid rezoning philosophy of the Arbitrary Lagrangian-Eulerian (ALE) method. As a result, it can handle flow speeds from supersonic to the incompressible limit in a grid that may be moved with the fluid in typical Lagrangian fashion, or held fixed in an Eulerian manner, or moved in some arbitrary way to give a continuous rezoning capability. The report describes the combined (ICEd-ALE) technique in the framework of the SALE (Simplified ALE) computer program, for which a general flow diagram and complete FORTRAN listing are included. A set of sample problems show how to use or modify the basic code for a variety of applications. Numerical listings are provided for a sample problem run with the SALE program.

  16. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

    SciTech Connect

    Panchal, C.B.; Rabas, T.J.

    1991-05-01

    This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.

  17. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
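
    The implicit Newmark-Beta scheme named in the abstract is a standard structural dynamics integrator, so a compact single-degree-of-freedom version may be useful as a reference; the HAWT code applies the same update to the full coupled tower/rotor matrices. The oscillator parameters below are assumptions.

    import math

    def newmark_beta(m, c, k, f, x0, v0, dt, nsteps, beta=0.25, gamma=0.5):
        """Integrate m*a + c*v + k*x = f(t) with the Newmark-Beta method."""
        x, v = x0, v0
        a = (f(0.0) - c * v - k * x) / m
        keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
        out = [(0.0, x)]
        for n in range(1, nsteps + 1):
            t = n * dt
            # effective load built from the previous state (standard update)
            feff = (f(t)
                    + m * (x / (beta * dt**2) + v / (beta * dt)
                           + (0.5 / beta - 1) * a)
                    + c * (gamma * x / (beta * dt) + (gamma / beta - 1) * v
                           + dt * (gamma / (2 * beta) - 1) * a))
            x_new = feff / keff
            a_new = ((x_new - x) / (beta * dt**2) - v / (beta * dt)
                     - (0.5 / beta - 1) * a)
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            x, v, a = x_new, v_new, a_new
            out.append((t, x))
        return out

    if __name__ == "__main__":
        hist = newmark_beta(m=1.0, c=0.1, k=(2 * math.pi) ** 2,   # ~1 Hz mode
                            f=lambda t: 0.0, x0=1.0, v0=0.0,
                            dt=0.01, nsteps=100)
        print("displacement after 1 s: %.3f" % hist[-1][1])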

  18. Computational Analysis of an Evolutionarily Conserved VertebrateMuscle Alternative Splicing Program

    SciTech Connect

    Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr,Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky,Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

    2006-06-15

    A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns were examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.

  19. LIAR -- A computer program for the modeling and simulation of high performance linacs

    SciTech Connect

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straight-forward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  20. A computer program for engineering simulations of space reactor system performance

    SciTech Connect

    Dobranich, D.

    1992-01-01

    Nuclear thermal propulsion systems are envisioned as a fast and efficient form of transportation for the exploration of space. The short transit time afforded by nuclear rockets is especially attractive for a manned mission to Mars. Several nuclear reactor concepts have been proposed for such a system, including prismatic reactors and particle-bed reactors. These concepts have their merits but need to be evaluated in the context of system performance. SAFSIM (system analysis flow simulator) is an engineering computer program that allows the fluid mechanic, heat transfer, and reactor dynamic simulation of the entire propulsion system. The motivation for SAFSIM is the desire to have a tool to provide quick and inexpensive engineering performance simulations of complicated systems. The simulations are intended to provide a first-look understanding of the system's transient behavior under operational and off-normal conditions.

    1. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users’ Manual

      SciTech Connect

      Dr. Bradley J Schrader

      2010-10-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

    2. Radiological Safety Analysis Computer (RSAC) Program Version 7.0 Users’ Manual

      SciTech Connect

      Dr. Bradley J Schrader

      2009-03-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.0 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

    3. Advanced Scientific Computing Research

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    4. Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. ! Application and System Memory Use, Configuration, and Problems on Bassi Richard Gerber Lawrence Berkeley National Laboratory NERSC User Services ScicomP 13 Garching bei München, Germany, July 17, 2007 ScicomP 13, July 17, 2007, Garching Overview * About Bassi * Memory on Bassi * Large Page Memory (It's Great!) * System Configuration * Large Page

    5. SIMMER-II: A computer program for LMFBR disrupted core analysis

      SciTech Connect

      Bohl, W.R.; Luck, L.B.

      1990-06-01

      SIMMER-2 (Version 12) is a computer program to predict the coupled neutronic and fluid-dynamics behavior of liquid-metal fast reactors during core-disruptive accident transients. The modeling philosophy is based on the use of general, but approximate, physics to represent interactions of accident phenomena and regimes rather than a detailed representation of specialized situations. Reactor neutronic behavior is predicted by solving space (r,z), energy, and time-dependent neutron conservation equations (discrete ordinates transport or diffusion). The neutronics and the fluid dynamics are coupled via temperature- and background-dependent cross sections and the reactor power distribution. The fluid-dynamics calculation solves multicomponent, multiphase, multifield equations for mass, momentum, and energy conservation in (r,z) or (x,y) geometry. A structure field with nine density and five energy components; a liquid field with eight density and six energy components; and a vapor field with six density and one energy component are coupled by exchange functions representing a modified-dispersed flow regime with a zero-dimensional intra-cell structure model.

    6. CriTi-CAL: A computer program for Critical Coiled Tubing Calculations

      SciTech Connect

      He, X.

      1995-12-31

      A computer software package for simulating coiled tubing operations has been developed at Rogaland Research. The software is named CriTi-CAL, for Critical Coiled Tubing Calculations. It is a PC program running under Microsoft Windows. CriTi-CAL is designed for predicting force, stress, torque, lockup, circulation pressure losses and along-hole-depth corrections for coiled tubing workover and drilling operations. CriTi-CAL features a user-friendly interface, integrated work string and survey editors, flexible input units and output format, on-line documentation and extensive error trapping. CriTi-CAL was developed by using a combination of Visual Basic and C. Such an approach is an effective way to quickly develop high quality small to medium size software for the oil industry. The software is based on the results of intensive experimental and theoretical studies on buckling and post-buckling of coiled tubing at Rogaland Research. The software has been validated by full-scale test results and field data.

    7. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

      SciTech Connect

      Park, Won Young; Phadke, Amol; Shah, Nihar

      2012-06-29

      Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today's technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors, which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

    8. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

      SciTech Connect

      Jaffe, L. D.

      1984-02-15

      CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.

    9. Computer program for the sensitivity calculation of a CR-39 detector in a diffusion chamber for radon measurements

      SciTech Connect

      Nikezic, D.; Stajic, J. M.; Yu, K. N.

      2014-02-15

      Computer software for calculation of the sensitivity of a CR-39 detector enclosed in a diffusion chamber to radon is described in this work. The software consists of two programs, both written in the standard Fortran 90 programming language. The physical background and a numerical example are given. The presented software is intended for researchers in the radon measurement community. Previously published computer programs TRACK-TEST.F90 and TRACK-VISION.F90 [D. Nikezic and K. N. Yu, Comput. Phys. Commun. 174, 160 (2006); D. Nikezic and K. N. Yu, Comput. Phys. Commun. 178, 591 (2008)] are used here as subroutines to calculate the track parameters and to determine whether the track is visible or not, based on the incident angle, impact energy, etching conditions, gray level, and visibility criterion. The results obtained by the software, using five different V functions, were compared with the experimental data found in the literature. Application of two functions in this software reproduced the experimental data very well, while the other three gave lower sensitivity than the experiment.

    10. DOE High Performance Computing for Manufacturing Program Seeks to Fund New Proposals to Advance Energy Technologies

      Energy.gov [DOE]

      The Energy Department’s Advanced Manufacturing Office today announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department's national laboratories to tackle major manufacturing challenges.

    11. Programs for attracting under-represented minority students to graduate school and research careers in computational science. Final report for period October 1, 1995 - September 30, 1997

      SciTech Connect

      Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno

      1997-10-01

      Programs have been established at Florida A&M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic-year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer; these included workshops, meetings, and lectures. Through the exposure this program provided to scientific ideas and research in computational science, students are likely to go on to apply tools from this interdisciplinary field successfully.

    12. Computing Information

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    13. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

    14. THERM3D -- A boundary element computer program for transient heat conduction problems

      SciTech Connect

      Ingber, M.S.

      1994-02-01

      The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.
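
      The radial basis function step mentioned in the abstract, fitting an interior forcing field at scattered points so that an approximate particular solution can be built, can be sketched generically as multiquadric interpolation. The centre locations, sample field, and shape parameter below are assumptions, not the THERM3D formulation.

      import numpy as np

      def rbf_fit(centres, values, c=0.1):
          """Solve for multiquadric weights so the expansion matches the data."""
          r2 = np.sum((centres[:, None, :] - centres[None, :, :]) ** 2, axis=-1)
          phi = np.sqrt(r2 + c * c)
          return np.linalg.solve(phi, values)

      def rbf_eval(x, centres, weights, c=0.1):
          """Evaluate the fitted multiquadric expansion at a point x."""
          r2 = np.sum((x[None, :] - centres) ** 2, axis=-1)
          return float(np.sqrt(r2 + c * c) @ weights)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pts = rng.uniform(0.0, 1.0, size=(50, 3))          # interior points
          forcing = np.sin(np.pi * pts[:, 0]) * pts[:, 1]    # stand-in forcing field
          w = rbf_fit(pts, forcing)
          test = np.array([0.5, 0.5, 0.5])
          exact = np.sin(np.pi * test[0]) * test[1]
          print("approx: %.4f  exact: %.4f" % (rbf_eval(test, pts, w), exact))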

    15. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

      Energy.gov [DOE]

      The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

    16. Opportunities for Russian Nuclear Weapons Institute developing computer-aided design programs for pharmaceutical drug discovery. Final report

      SciTech Connect

      1996-09-23

      The goal of this study is to determine whether physicists at the Russian Nuclear Weapons Institute can profitably service the need for computer aided drug design (CADD) programs. The Russian physicists' primary competitive advantages are their ability to write particularly efficient code able to work with limited computing power; a history of working with very large, complex modeling systems; an extensive knowledge of physics and mathematics; and price competitiveness. Their primary competitive disadvantages are their lack of biology expertise, along with cultural and geographic issues. The first phase of the study focused on defining the competitive landscape, primarily through interviews with and literature searches on the key providers of CADD software. The second phase focused on users of CADD technology to determine deficiencies in the current product offerings, to understand what product they most desired, and to define the potential demand for such a product.

    17. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
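
      The present-worth calculation mentioned at the end of the abstract is ordinary discounting of the annual revenue requirements; a minimal sketch follows, with the discount rate and the dollar stream chosen purely for illustration.

      def present_worth(annual_revenue_requirements, discount_rate):
          """Sum R_t / (1 + i)^t over the analysis period (t = 1, 2, ...)."""
          return sum(r / (1.0 + discount_rate) ** t
                     for t, r in enumerate(annual_revenue_requirements, start=1))

      if __name__ == "__main__":
          stream = [120.0, 118.0, 116.0, 114.0, 112.0]   # M$/yr, illustrative only
          print("present worth: %.1f M$" % present_worth(stream, 0.08))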

    18. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

      SciTech Connect

      1997-12-31

      This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

    19. MPSalsa a finite element computer program for reacting flow problems. Part 2 - user's guide

      SciTech Connect

      Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

      1996-09-01

      This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

    20. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

      SciTech Connect

      Strange, D. L.; Bander, T. J.

      1981-04-01

      The MILDOS Computer Code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. This code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters. The surface concentrations are modified by radioactive decay, weathering and other environmental processes. The MILDOS Computer Code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are: • Inhalation • External exposure from ground concentrations • External exposure from cloud immersion • Ingestion of vegetables • Ingestion of meat • Ingestion of milk. Dose commitments are calculated using dose conversion factors, which are ultimately based
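
      As a rough illustration of the sector-averaged Gaussian plume estimate described above (not the MILDOS implementation), the sketch below evaluates a long-term average chi/Q for one 22.5-degree wind sector using the common sector-averaged form sqrt(2/pi) * f * exp(-h^2/(2*sigma_z^2)) / (x * theta * u * sigma_z); the source term, wind statistics, release height, and vertical dispersion coefficient are assumptions.

      import math

      def sector_average_chi_over_q(x, sigma_z, u, wind_freq, release_height):
          """Long-term average chi/Q (s/m^3) in one 22.5-degree wind sector."""
          theta = math.radians(22.5)                        # one of 16 sectors
          vertical = math.exp(-release_height**2 / (2.0 * sigma_z**2))
          return (math.sqrt(2.0 / math.pi) * wind_freq * vertical
                  / (x * theta * u * sigma_z))

      if __name__ == "__main__":
          chi_q = sector_average_chi_over_q(x=1000.0,       # m downwind
                                            sigma_z=32.0,   # m (assumed stability)
                                            u=4.0,          # m/s mean wind speed
                                            wind_freq=0.15, # fraction of time into sector
                                            release_height=10.0)
          q = 3.7e4                                         # Bq/s source term (assumed)
          print("annual-average concentration: %.2e Bq/m^3" % (chi_q * q))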

    1. Development of computer program ENMASK for prediction of residual environmental masking-noise spectra, from any three independent environmental parameters

      SciTech Connect

      Chang, Y.-S.; Liebich, R. E.; Chun, K. C.

      2000-03-31

      Residual environmental sound can mask intrusive (unwanted) sound. It is a factor that can affect noise impacts and must be considered both in noise-impact studies and in noise-mitigation designs. Models for quantitative prediction of sensation level (audibility) and psychological effects of intrusive noise require an input with 1/3 octave-band spectral resolution of environmental masking noise. However, the majority of published residual environmental masking-noise data are given with either octave-band frequency resolution or only single A-weighted decibel values. A model has been developed that enables estimation of 1/3 octave-band residual environmental masking-noise spectra and relates certain environmental parameters to A-weighted sound level. This model provides a correlation among three environmental conditions: measured residual A-weighted sound-pressure level, proximity to a major roadway, and population density. Cited field-study data were used to compute the most probable 1/3 octave-band sound-pressure spectrum corresponding to any selected one of these three inputs. In turn, such spectra can be used as an input to models for prediction of noise impacts. This paper discusses specific algorithms included in the newly developed computer program ENMASK. In addition, the relative audibility of the environmental masking-noise spectra at different A-weighted sound levels is discussed, which is determined by using the methodology of program ENAUDIBL.

    2. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

      SciTech Connect

      Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

      1997-12-31

      The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition of numerical result independence from the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses the 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on using a 3D data matrix decomposition not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made in VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.

    3. Computing Videos

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computing Videos

    4. Ocean-ice/oil-weathering computer program user's manual. Final report

      SciTech Connect

      Kirstein, B.E.; Redding, R.T.

      1987-10-01

      The ocean-ice/oil-weathering code is written in FORTRAN as a series of stand-alone subroutines that can easily be installed on most any computer. All of the trial-and-error routines, integration routines, and other special routines are written in the code so that nothing more than the normal system functions such as EXP are required. The code is user-interactive and requests input by prompting questions with suggested input. Therefore, the user can actually learn about the nature of crude oil and oil weathering by using this code. The ocean-ice oil-weathering model considers the following weathering processes: evaporation; dispersion (oil into water); mousse (water into oil); and spreading. These processes are used to predict the mass balance and composition of oil remaining in the slick as a function of time and environmental parameters.

    5. A Computer Program for Processing In Situ Permeable Flow Sensor Data

      Energy Science and Technology Software Center

      1996-04-15

      FLOW4.02 is used to interpret data from In Situ Permeable Flow Sensors, which are instruments that directly measure groundwater flow velocity in saturated, unconsolidated geologic formations (Ballard, 1994, 1996; Ballard et al., 1994; Ballard et al., in press). The program accepts as input the electrical resistance measurements from the thermistors incorporated within the flow sensors, converts the resistance data to temperatures and then uses the temperature information to calculate the groundwater flow velocity and associated uncertainty. The software includes many capabilities for manipulating, graphically displaying and writing to disk the raw resistance data, the temperature data and the calculated flow velocity information. This version is a major revision of a previously copyrighted version (FLOW1.0).

    6. Load determination for long cable bolt support using computer aided bolt load estimation (CABLE) program

      SciTech Connect

      Bawden, W.F.; Moosavi, M.; Hyett, A.J.

      1996-12-01

      In this paper a numerical formulation is presented for determination of the axial load along a cable bolt for a prescribed distribution of rock mass displacement. Results using the program CABLE indicate that during excavation, the load distribution that develops along an untensioned, fully grouted cable bolt depends on three main factors: (i) the properties of the cable itself, (ii) the shear force that develops due to bond at the cable-grout interface (i.e., bond stiffness), and (iii) the distribution of rock mass displacement along the cable bolt length. In general, the effect of low-modulus rock and mining-induced stress decreases, which reduce bond strength as determined from short embedment length tests, is reflected in the development of axial loads significantly less than the ultimate tensile capacity, even for long cable bolts. However, the load distribution is also dependent on the deformation distribution in the reinforced rock mass. Higher cable bolt loads will be developed for a rock mass that behaves as a discontinuum, with deformation concentrated on a few fractures, than for one which behaves as a continuum, either due to a total lack of fractures or a very high fracture density. This result suggests that the stiffness of a fully grouted cable bolt is not simply a characteristic of the bolt and grout used, but also of the deformation behavior of the ground. In other words, the same combination of bolt and grout will be stiffer if the rock behaves as a discontinuum than if it behaves as a continuum. This paper also explains the laboratory test program used to determine the constitutive behavior of the Garford bulb and nutcase cable bolts. Details of the test setup as well as the results obtained are summarized and discussed.

    7. TRUST: A Computer Program for Variably Saturated Flow in Multidimensional, Deformable Media

      SciTech Connect

      Reisenauer, A. E.; Key, K. T.; Narasimhan, T. N.; Nelson, R. W.

      1982-01-01

      The computer code TRUST provides a versatile tool to solve a wide spectrum of fluid flow problems arising in variably saturated, deformable porous media. The governing equations express the conservation of fluid mass in an elemental volume that has a constant volume of solid. Deformation of the skeleton may be nonelastic. Permeability and compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may include hysteresis. The code, developed by T. N. Narasimhan, grew out of the original TRUMP code written by A. L. Edwards. The code uses an integrated finite difference algorithm for numerically solving the governing equation. Marching in time is performed by a mixed explicit-implicit numerical procedure in which the time step is internally controlled. The time step control and related features in the TRUST code provide effective control of the potential numerical instabilities that can arise in the course of solving this difficult class of nonlinear boundary value problems. This document brings together the equations, theory, and user's manual for the code, as well as a sample case with input and output.
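
      The internally controlled time step can be illustrated with a generic adaptive-stepping rule (shrink and retry when the solution changes too much, grow when it changes very little); the thresholds, factors, and the placeholder step function below are assumptions, not TRUST's actual logic.

        # Generic adaptive time-step controller (illustrative; step_fn is a placeholder
        # that advances the solution by dt and returns (new_solution, change_measure)).
        def advance(step_fn, u, t_end, dt=1.0, tol=1e-2, grow=1.5, shrink=0.5):
            t = 0.0
            while t < t_end:
                dt = min(dt, t_end - t)
                u_new, change = step_fn(u, dt)
                if change > tol:
                    dt *= shrink              # too large a change: retry with a smaller step
                    continue
                u, t = u_new, t + dt
                if change < 0.1 * tol:
                    dt *= grow                # very smooth behavior: allow a larger step
            return u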

    8. User's manual for EROSION/MOD1: A computer program for fluids-solids erosion

      SciTech Connect

      Lyczkowski, R.W.; Bouillard, J.X.; Folga, S.M.; Chang, S.L.

      1992-09-01

      This report describes EROSION/MOD1, a computer program that was developed as a two-dimensional analytical tool for the general analysis of erosion in fluid-solids systems and the specific analysis of erosion in bubbling fluidized-bed combustors. Contained herein are implementations of Finnie's impaction erosion model, Neilson and Gilchrist's combined ductile and brittle erosion model, and several forms of the monolayer energy dissipation erosion model. These models and their implementations are described briefly. The global structure of EROSION/MOD1 that contains these models is also discussed. The input data for EROSION/MOD1 are given, and a sample problem for a fluidized bed is described. The hydrodynamic input data are assumed to come from the output of FLUFIX/MOD2.
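
      For orientation, one commonly quoted textbook form of Finnie's ductile-erosion expression is sketched below; the constants and exact form vary between publications, so this is only an illustration of an impact-angle-dependent erosion model, not the EROSION/MOD1 implementation.

        import math

        def finnie_volume(m, v, p, alpha, K=2.0, psi=2.0, c=0.5):
            """Volume removed by one particle in a textbook form of Finnie's model.
            m: particle mass, v: impact speed, p: plastic flow stress,
            alpha: impact angle in radians. Constants K, psi, c are illustrative."""
            pre = c * m * v**2 / (p * psi * K)
            if math.tan(alpha) <= K / 6.0:
                return pre * (math.sin(2.0 * alpha) - (6.0 / K) * math.sin(alpha)**2)
            return pre * (K * math.cos(alpha)**2 / 6.0)

        # Ductile erosion peaks at shallow impact angles:
        for deg in (10, 20, 45, 90):
            print(deg, finnie_volume(1e-9, 30.0, 1e9, math.radians(deg)))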

    9. MASBAL: A computer program for predicting the composition of nuclear waste glass produced by a slurry-fed ceramic melter

      SciTech Connect

      Reimus, P.W.

      1987-07-01

      This report is a user's manual for the MASBAL computer program. MASBAL's objectives are to predict the composition of nuclear waste glass produced by a slurry-fed ceramic melter based on a knowledge of process conditions; to generate simulated data that can be used to estimate the uncertainty in the predicted glass composition as a function of process uncertainties; and to generate simulated data that can be used to provide a measure of the inherent variability in the glass composition as a function of the inherent variability in the feed composition. These three capabilities are important to nuclear waste glass producers because there are constraints on the range of compositions that can be processed in a ceramic melter and on the range of compositions that will be acceptable for disposal in a geologic repository. MASBAL was developed specifically to simulate the operation of the West Valley Component Test system, a commercial-scale ceramic melter system that will process high-level nuclear wastes currently stored in underground tanks at the site of the Western New York Nuclear Services Center (near West Valley, New York). The program is flexible enough, however, to simulate any slurry-fed ceramic melter system. 4 refs., 16 figs., 5 tabs.

    10. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

      SciTech Connect

      Barbara Chapman

      2012-02-01

      OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction aligned with DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators, and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

    11. Angular neutron transport investigation in the HZETRN free-space ion and nucleon transport and shielding computer program

      SciTech Connect

      Singleterry, R.C. Jr.; Wilson, J.W.

      1997-05-01

      Extension of the high charge and energy (HZE) transport computer program HZETRN for angular transport of neutrons is considered. For this paper, only light ion transport, He-4 and lighter, will be analyzed using a pure solar proton source. The angular transport calculator is the ANISN/PC program, which is controlled by the HZETRN program. The neutron flux values are compared for straight-ahead transport and angular transport in one dimension. The shield material is aluminum and the target material is water. The thickness of these materials is varied; however, only the largest model calculated is reported, which is 50 g/cm^2 of aluminum and 100 g/cm^2 of water. The flux from the ANISN/PC calculation is about two orders of magnitude lower than the flux from HZETRN for very low energy neutrons. It is only an order of magnitude lower for neutrons in the 10 to 20 MeV range in the aluminum and two orders lower in the water. The major reason for this difference is the transport mode: straight-ahead versus angular. The angular treatment allows a longer path length than the straight-ahead approximation. Another reason is the different cross section sets used by the ANISN/PC-BUGLE-80 mode and the HZETRN mode. The next step is to investigate further the differences between the two codes and isolate the differences to just the angular versus straight-ahead transport mode, and then create a better coupling between the angular neutron transport and the charged particle transport.
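
      The path-length argument can be made concrete: a particle crossing a slab of thickness t at angle theta from the normal traverses t/cos(theta) of material, so the angular treatment samples longer paths than the straight-ahead (theta = 0) approximation. A purely illustrative sketch:

        import math

        def slant_path(thickness_gcm2, theta_deg):
            """Material path length through a slab at an angle from the normal."""
            return thickness_gcm2 / math.cos(math.radians(theta_deg))

        # 50 g/cm^2 of aluminum seen straight-ahead versus at 60 degrees:
        print(slant_path(50.0, 0.0), slant_path(50.0, 60.0))   # 50.0 vs ~100.0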

    12. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh HyperCard database to function as both an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh system (HyperCard 2.2 or later), and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included, with suggested resources for programmers.

    13. Introduction to Radcalc: A computer program to calculate the radiolytic production of hydrogen gas from radioactive wastes in packages

      SciTech Connect

      Green, J.R.; Hillesland, K.E.; Field, J.G.

      1995-04-01

      A calculational technique for quantifying the concentration of hydrogen generated by radiolysis in sealed radioactive waste containers was developed in a U.S. Department of Energy (DOE) study conducted by EG&G Idaho, Inc., and the Electric Power Research Institute (EPRI) TMI-2 Technology Transfer Office. The study resulted in report GEND-041, entitled "A Calculational Technique to Predict Combustible Gas Generation in Sealed Radioactive Waste Containers". The study also resulted in a presentation to the U.S. Nuclear Regulatory Commission (NRC), which gained acceptance of the methodology for use in ensuring compliance with NRC IE Information Notice No. 84-72 (NRC 1984) concerning the generation of hydrogen within packages. NRC IE Information Notice No. 84-72, "Clarification of Conditions for Waste Shipments Subject to Hydrogen Gas Generation", applies to any package containing water and/or organic substances that could radiolytically generate combustible gases. EPRI developed Radcalc, a simple computer program in a spreadsheet format utilizing the GEND-041 calculational methodology to predict hydrogen gas concentrations in low-level radioactive waste containers. The computer code was extensively benchmarked against TMI-2 (Three Mile Island) EPICOR II resin bed measurements. The benchmarking showed that the model predicted hydrogen gas concentrations within 20% of the measured concentrations. Radcalc for Windows was developed using the same calculational methodology. The code is written in Microsoft Visual C++ 2.0 and includes a Microsoft Windows compatible menu-driven front end. In addition to hydrogen gas concentration calculations, Radcalc for Windows also provides transportation and packaging information such as pressure buildup, total activity, decay heat, fissile activity, TRU activity, and transportation classifications.
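
      The core of a GEND-041-style estimate is a radiation-chemical yield (G value, molecules per 100 eV absorbed) applied to the energy deposited in the waste; the sketch below shows that relationship with placeholder inputs and is not the Radcalc algorithm or its data.

        # Hydrogen generation from absorbed decay energy using a G value.
        # All numeric inputs below are illustrative placeholders.
        AVOGADRO = 6.022e23
        EV_PER_JOULE = 6.242e18

        def h2_moles_per_year(decay_heat_w, absorbed_fraction=0.1, g_h2=0.45):
            energy_j = decay_heat_w * absorbed_fraction * 3.156e7   # J absorbed in one year
            molecules = energy_j * EV_PER_JOULE / 100.0 * g_h2      # G value is per 100 eV
            return molecules / AVOGADRO

        print(h2_moles_per_year(1.0))   # moles of H2 per year for 1 W of decay heat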

    14. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Programming on Euclid: compiling and linking programs, including how to compile and link MPI codes; using the ACML math library (compiling and linking with the $ACML environment variable); and the hard and soft process limits.

    15. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compiling codes on Hopper: Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with the MPI libraries and other Cray system software libraries. All the MPI and Cray system include directories are also transparently imported. This page shows examples of how to compile codes on Franklin and Hopper.

    16. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The genepool system has a diverse set of software development tools and a rich environment for delivering their functionality to users. Genepool has adopted a modular system which has been adapted from the Programming Environments similar to those provided on the Cray systems at NERSC. The Programming Environment is managed by a meta-module named similar to "PrgEnv-gnu/4.6". The "gnu" indicates that it is providing the GNU environment, principally GCC,

    17. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Each programming environment contains the full set of compatible compilers and libraries. ...

    18. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

      SciTech Connect

      Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

      2014-09-01

      Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that they help the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactor Sustainability Program

    19. PADLOC: a one-dimensional computer program for calculating coolant and plateout fission-product concentrations. Part 2

      SciTech Connect

      Hudritsch, W.W.

      1981-09-01

      The behavior of some of the prominent fission products along their convection pathways is dominated by the interaction of other species with them. This gave rise to the development of a plateout code capable of analyzing coupled species effects. The single-species plateout computer program PADLOC is described in Part I of this report. The present Part II is concerned with the extension of PADLOC to MULTI*PADLOC, a multiple-species version of PADLOC. MULTI*PADLOC is designed to analyze the time and one-dimensional spatial dependence of the concentrations of interacting (fission product) species in the carrier gas and on the surrounding wall surfaces on an arbitrary network of flow channels. The problem solved is one of mass transport of several impurity species in a gas, including the effects of sources in the gas and on the surface, convection along the flow paths, decay interaction, sorption interaction on the wall surfaces, and chemical reaction interactions in the gas and on the surfaces. These phenomena are governed by a system of coupled, nonlinear partial differential equations. The solution is achieved by: (a) linearizing the equations about an approximate solution and employing a Newton-Raphson iteration technique, (b) employing a finite difference solution method with an implicit time integration, and (c) employing a substructuring technique to logically organize the systems of equations for an arbitrary flow network.
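
      The solution strategy in (a) is a standard Newton-Raphson iteration; a generic sketch for a small coupled nonlinear system is shown below, with an invented two-species residual, and is not MULTI*PADLOC's discretization.

        import numpy as np

        def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
            """Solve residual(x) = 0 for a coupled nonlinear system."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                x = x - np.linalg.solve(jacobian(x), r)   # linearize and update
            return x

        # Example: two coupled "species" with nonlinear interaction terms.
        residual = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
        jacobian = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])
        print(newton_raphson(residual, jacobian, [1.0, 1.0]))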

    20. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    1. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      using MPI and OpenMP on NERSC systems, the same does not always exist for other supported parallel programming models such as UPC or Chapel. At the same time, we know that these...

    2. Programming

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compiling codes on Carver: there are three compiler suites available on Carver: Portland Group (PGI), Intel, and GCC. The PGI compilers are the default, to provide compatibility with other NERSC platforms. Using MKL: Intel's Math Kernel Library (MKL) is a library of highly optimized, extensively threaded math routines optimized for Intel processors. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more.

    3. Exascale Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in

    4. Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Workshop Information: Material properties are determined by their structures or atomic arrangements. Three themes are emerging that offer unprecedented opportunities in static and transient material research and discoveries in the coming decade: high-energy X-ray free electron lasers (XFELs), high-performance imaging detector technology, and exascale computing. In structure determination, XFEL plays the role of information generation, imaging detectors the role of information collection, and

    5. Computer program for predicting surface subsidence resulting from pressure depletion in geopressured wells: subsidence prediction for the DOW test well No. 1, Parcperdue, Louisiana

      SciTech Connect

      Janssen, J.C.; Carver, D.R.; Bebout, D.G.; Bachman, A.L.

      1981-01-01

      The nucleus-of-strain concept is used to construct a computer program for predicting surface subsidence due to pressure reduction in geopressured reservoirs. Numerical integration allows one to compute the vertical displacement of the ground surface directly above and beyond the aquifer boundaries which results from the pressure reduction in each of the small finite volumes into which the aquifer is partitioned. The program treats depth (measured from the surface to the mean thickness of the aquifer) as a constant. Variation in aquifer thickness is accounted for by linear interpolation from one boundary to its opposite. In this simple model, subsidence is proportional to the pressure reduction (considered constant in this presentation) and to but one physical parameter, Cm(1-ν), in which Cm is the coefficient of uniaxial compaction and ν is Poisson's ratio.
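
      The nucleus-of-strain summation can be sketched with a Geertsma-type half-space expression for the surface displacement above a small depleted volume; the formula is the commonly cited one for a homogeneous half-space, and every number below is a placeholder, not the Parcperdue data.

        import math

        def subsidence_at(x, y, blocks, cm, nu):
            """Surface subsidence at (x, y) from depleted blocks, each given as
            (xc, yc, depth, volume_m3, dp_pa), using a Geertsma-type nucleus of strain."""
            uz = 0.0
            for xc, yc, depth, vol, dp in blocks:
                r2 = (x - xc)**2 + (y - yc)**2
                uz += cm * (1.0 - nu) / math.pi * dp * vol * depth / (r2 + depth**2)**1.5
            return uz

        # One block at 1 km depth, 1e7 m^3, 10 MPa pressure drop (illustrative numbers).
        blocks = [(0.0, 0.0, 1000.0, 1.0e7, 1.0e7)]
        print(subsidence_at(0.0, 0.0, blocks, cm=1.0e-9, nu=0.25))   # metres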

    6. SU-E-T-596: P3DVHStats - a Novel, Automatic, Institution Customizable Program to Compute and Report DVH Quantities On Philips Pinnacle TPS

      SciTech Connect

      Wu, C

      2015-06-15

      Purpose: To implement a novel, automatic, institution-customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: An add-on program (P3DVHStats) was developed to enable automatic evaluation of DVH quantities (including both volume- and dose-based quantities, such as V98, V100, D2) and automatic PDF report generation, for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides the user convenient access to the program when needed. The activated script first exports DVH data for user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, uses the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program will also generate a PDF report of the results and export it from Pinnacle to the EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin client and P3PC access), for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes to use an MS-Excel worksheet to calculate these DVH quantities for IMRT/VMAT plans and manually save them as a PDF report; with the new program, it takes a few mouse clicks and less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java language based program was successfully implemented and customized to our institutional needs. It is shown to dramatically reduce the time and effort needed for DVH quantities computation and EMR reporting.
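
      The DVH quantities themselves (e.g., V98, D2) reduce to simple threshold and percentile operations on the dose distribution of an ROI; the sketch below uses random dose samples purely for illustration and is not the P3DVHStats code.

        import numpy as np

        # dose_gy: sampled voxel doses (Gy) for one ROI (random data for illustration).
        rng = np.random.default_rng(0)
        dose_gy = rng.normal(60.0, 2.0, 10000)
        prescription = 60.0

        def v_percent(dose_gy, threshold_gy):
            """Vxx: percent of the ROI volume receiving at least threshold_gy."""
            return 100.0 * np.mean(dose_gy >= threshold_gy)

        def d_percent(dose_gy, volume_percent):
            """Dxx: minimum dose received by the hottest xx percent of the volume."""
            return np.percentile(dose_gy, 100.0 - volume_percent)

        print("V98 =", v_percent(dose_gy, 0.98 * prescription))
        print("D2  =", d_percent(dose_gy, 2.0))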

    7. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software. Draft report for comment

      SciTech Connect

      Bierschbach, M.C.; Mencinsky, G.J.

      1993-10-01

      With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    8. Estimating boiling water reactor decommissioning costs: A user's manual for the BWR Cost Estimating Computer Program (CECP) software. Final report

      SciTech Connect

      Bierschbach, M.C.

      1996-06-01

      Nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review their decommissioning cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning boiling water reactor (BWR) power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    9. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

      SciTech Connect

      Bierschbach, M.C.

      1994-12-01

      With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the U.S. Nuclear Regulatory Commission (NRC) for review decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

    10. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

      SciTech Connect

      Farbin, Amir

      2015-07-15

      This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

    11. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies

      Energy.gov [DOE]

      A new U.S. Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance U.S. manufacturing is now seeking a second round of proposals from industry to compete for approximately $3 million in new funding.

    12. Paul C. Messina | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing ...

    13. Report from the Committee of Visitors on its Review of the Processes and Procedures used to Manage the Theory and Computations Program, Fusion Energy Sciences Advisory Committee

      SciTech Connect

      none,

      2004-03-01

      A Committee of Visitors (COV) was formed to review the procedures used by the Office of Fusion Energy Sciences to manage its Theory and Computations program. The COV was pleased to conclude that the research portfolio supported by the OFES Theory and Computations Program was of very high quality. The Program supports research programs at universities, research industries, and national laboratories that are well regarded internationally and address questions of high relevance to the DOE. A major change in the management of the Theory and Computations program over the past few years has been the introduction of a system of comparative peer review to guide the OFES Theory Team in selecting proposals for funding. The COV was impressed with the success of OFES in its implementation of comparative peer review and with the quality of the reviewers chosen by the OFES Theory Team. The COV concluded that the competitive peer review process has improved steadily over the three years that it has been in effect and that it has improved both the fairness and accountability of the proposal review process. While the COV commends OFES in its implementation of comparative review, the COV offers the following recommendations in the hope that they will further improve the comparative peer review process: The OFES should improve the consistency of peer reviews. We recommend adoption of a “results-oriented” scoring system in their guidelines to referees (see Appendix II), a greater use of review panels, and a standard format for proposals; The OFES should further improve the procedures and documentation for proposal handling. We recommend that the “folders” documenting funding decisions contain all the input from all of the reviewers, that OFES document their rationale for funding decisions which are at variance with the recommendation of the peer reviewers, and that OFES provide a Summary Sheet within each folder; The OFES should better communicate the procedures used to

    14. Computers for Learning

      Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    15. Program Managers

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Managers Program Managers Enabling remarkable discoveries and tools that transform our understanding of energy and matter and advance national, economic, and energy security. Advanced Scientific Computing Applied Mathematics: Pieter Swart, T-5 Computer Science: Pat McCormick, CCS-7 Computational Partnerships: Galen Shipman, CCS-7 Basic Energy Sciences Materials Sciences & Engineering: Toni Taylor, ADCLES-DO CINT National User Facility: Alex Lacerda, MPA-CINT (Acting Director)

    16. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

      SciTech Connect

      Carey, D.C.

      1999-12-09

      TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effect of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
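
      At its core, tracking a ray through a lumped element amounts to multiplying the particle's phase-space vector by that element's first-order transfer matrix and histogramming the results. The sketch below does this for a simple drift space in one transverse plane; it only illustrates the idea and is not TURTLE's input format or physics.

        import numpy as np

        # Track many rays (x, x') through a drift of length L (one transverse plane).
        rng = np.random.default_rng(1)
        n_rays = 100000
        rays = np.column_stack([rng.normal(0.0, 1e-3, n_rays),    # x in metres
                                rng.normal(0.0, 1e-4, n_rays)])   # x' in radians

        L = 2.0                                   # drift length in metres
        drift = np.array([[1.0, L],
                          [0.0, 1.0]])            # first-order transfer matrix

        rays_out = rays @ drift.T                 # apply the element to every ray
        print("rms x at exit:", rays_out[:, 0].std())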

    17. Supercomputing Challenge Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      A program that teaches middle school and high school students how to use powerful computers to model real-world problems and to explore computational approaches to their...

    18. Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ... of Energy's (DOE) Advanced Scientific Computing Research program within the ... review by an international panel of experts ... The refereed journal articles and conference ...

    19. Computing Resources

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      TRACC research: computational fluid dynamics, computational structural mechanics, and transportation systems modeling. Computing Resources: The TRACC Computational Clusters ...

    20. Computational mechanics

      SciTech Connect

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    1. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

      SciTech Connect

      Robert W. Numrich

      2008-04-22

      The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to extend the

    2. Computing Frontier: Distributed Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). Introduction: The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

    3. Computational mechanics

      SciTech Connect

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    4. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0, Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs.

    5. Parallel computing works

      SciTech Connect

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    6. Computing and Computational Sciences Directorate - Computer Science...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

    7. Computation & Simulation > Theory & Computation > Research >...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Extensive combinatorial results and ongoing basic...

    8. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

      SciTech Connect

      Watson, S.B.; Ford, M.R.

      1980-02-01

      A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limits of intake, and derived air concentrations.
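
      The overall structure of the calculation can be sketched with the standard ICRP 30 relationship: the committed dose equivalent to a target organ is proportional to the number of transformations in each source organ times the specific effective energy for that source-target pair. The inputs below are placeholders, not values produced by the ICRP Code.

        # Committed dose equivalent H50,T (Sv), ICRP-30 style:
        #   H = 1.6e-10 * sum_S U_S * SEE(T <- S)
        # U_S: transformations in source organ S over 50 years;
        # SEE: MeV per gram per transformation. 1.6e-10 converts MeV/g to J/kg.
        MEV_PER_G_TO_SV = 1.6e-10

        def committed_dose(u_s, see_t_s):
            return MEV_PER_G_TO_SV * sum(u * see for u, see in zip(u_s, see_t_s))

        # Two hypothetical source organs contributing to one target organ:
        print(committed_dose(u_s=[1.0e12, 5.0e11], see_t_s=[3.0e-4, 1.2e-3]))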

    9. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Parallel Computing Summer Research Internship: creates next-generation leaders in HPC research and applications development. Contacts: Program Co-Leads Robert (Bob) Robey, Hai Ah Nam, Kris Garrett, and Joseph Schoonover; Professional Staff Assistant Nickole Aguilar Garcia, (505) 665-3048. 2016 Students: Peter Ahrens, Electrical Engineering & Computer Science BS, UC Berkeley; Fall 2016: MIT PhD program

    10. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Parallel Computing Summer Research Internship (Information Science and Technology Institute (ISTI) Summer School Programs): creates next-generation leaders in HPC research and applications development. Contacts: Program Co-Leads Robert (Bob) Robey, Hai Ah Nam, Kris Garrett, and Joseph

    11. Edison Electrifies Scientific Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    12. Exascale Computing Project

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Project highlights: ALPINE (xRage simulation of asteroid yA31 at a 45 degree angle of entry), ExaSky (next-generation dark matter cosmology simulations), Trinity (Advanced Technology System advancing predictive capability for stockpile stewardship), and the Legion programming system (managing complex data and control dependencies). Meeting national security science challenges with reliable computing. As part of the National Strategic

    13. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, Fortran and with the LINUX operating system. Duration & Location The program will last ten...

    14. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead ...

    15. Program Activities | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      The Advanced Simulation and Computing program (ASC) is part of ... Office of Defense Programs. Defense Programs has six components: Research, ... at making the scientific and ...

    16. ARGONNE LEADERSHIP COMPUTING FACILITY

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Argonne Leadership Computing Facility 2016 Science Highlights. On the cover: collapse of a spherical cloud with 50,000 bubbles; the generation of microjets directed toward the cloud center causes the formation of cap-like bubble shapes. Image credit: Computational Science and Engineering Laboratory, ETH Zürich, Switzerland.

    17. Computer Architecture Lab

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National

    18. Computing and Computational Sciences Directorate - Computer Science...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      SECRETARIAL SUPPORT Winner: Lora Wolfe Organization: Computer Science and Mathematics Division Citation: For exemplary administrative secretarial support to the Computer Science ...

    19. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Hai Ah Nam Email Program Co-Lead Kris Garrett Email Program Co-Lead Joseph Schoonover Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Recommended Reading & Resources The Parallel Computing Summer Research Internship covers a broad range of

    20. SCC: The Strategic Computing Complex

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field.

    1. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

    2. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 2, User's manual

      SciTech Connect

      Gartling, D.K.

      1996-05-01

      User instructions are given for the finite element, electromagnetics program, TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.

    3. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOEpatents

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    4. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

    5. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

      SciTech Connect

      Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

      2012-02-01

      The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated based on the specific detail level of process or plant, i.e., 1) plant level; 2) process-group level; and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and water

    6. Compute nodes

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compute nodes: a detailed hierarchical map of the topology of a compute node is available.

    7. Computer System,

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      2016 Computer System, Cluster, and Networking Summer Institute (undergraduate summer institute, http://isti.lanl.gov, Educational Prog). Purpose: The Computer System,...

    8. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

      SciTech Connect

      Biffle, J.H.

      1993-02-01

      JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material models. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.
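      As a schematic of the solution strategy named above, and not JAC3D's actual algorithm, line search, or convergence accelerators, a minimal Fletcher-Reeves nonlinear conjugate gradient iteration for the equilibrium equations R(u) = f_ext - f_int(u) = 0 might look like:

        import numpy as np

        def nonlinear_cg(internal_force, f_ext, u0, tol=1e-8, max_iter=200):
            # The residual R(u) = f_ext - f_int(u) is treated as the negative gradient
            # of the potential energy; directions follow the Fletcher-Reeves update,
            # with a crude backtracking line search on the residual norm.
            u = u0.astype(float).copy()
            r = f_ext - internal_force(u)
            d = r.copy()
            for _ in range(max_iter):
                if np.linalg.norm(r) < tol:
                    break
                alpha = 1.0
                while (np.linalg.norm(f_ext - internal_force(u + alpha * d))
                       >= np.linalg.norm(r) and alpha > 1e-12):
                    alpha *= 0.5
                u = u + alpha * d
                r_new = f_ext - internal_force(u)
                beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves coefficient
                d = r_new + beta * d
                r = r_new
            return u

        # example: one nonlinear "element" with internal force 2*u + 0.5*u**3 (hypothetical)
        print(nonlinear_cg(lambda u: 2.0 * u + 0.5 * u**3, f_ext=np.array([3.0]), u0=np.zeros(1)))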

    9. Computing Sciences

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computing Sciences Our Vision National User Facilities Research Areas In Focus Global Solutions ⇒ Navigate Section Our Vision National User Facilities Research Areas In Focus Global Solutions Computational Research Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Scientific Networking

    10. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 3

      SciTech Connect

      Wolery, T.J.

      1992-09-14

      EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
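      The two disequilibrium measures quoted above are straightforward to evaluate once the ion activity product Q and equilibrium constant K of a reaction are known; a minimal numerical illustration (the Q, K, and temperature values are hypothetical, and this is not EQ3NR itself):

        import math

        R = 8.314462618  # gas constant, J/(mol K)

        def saturation_index(Q, K):
            """SI = log10(Q/K): negative for undersaturation, positive for supersaturation."""
            return math.log10(Q / K)

        def affinity(Q, K, T=298.15):
            """Thermodynamic affinity A = -2.303*R*T*log10(Q/K), in J/mol."""
            return -2.303 * R * T * saturation_index(Q, K)

        # hypothetical mineral whose ion activity product Q is half its equilibrium constant K
        Q, K = 0.5, 1.0
        print(saturation_index(Q, K))   # about -0.30 (undersaturated)
        print(affinity(Q, K))           # about +1.7e3 J/mol, so dissolution is favored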

    11. History | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Leadership Computing The Argonne Leadership Computing Facility (ALCF) was established at Argonne National Laboratory in 2004 as part of a U.S. Department of Energy (DOE) initiative dedicated to enabling leading-edge computational capabilities to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE Leadership

    12. An introduction to computer viruses

      SciTech Connect

      Brown, D.R.

      1992-03-01

      This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

    13. Probability of pipe fracture in the primary coolant loop of a PWR plant. Volume 9. PRAISE computer code user's manual. Load Combination Program Project I final report

      SciTech Connect

      Lim, E.Y.

      1981-06-01

      The PRAISE (Piping Reliability Analysis Including Seismic Events) computer code estimates the influence of earthquakes on the probability of failure at a weld joint in the primary coolant system of a pressurized water reactor. Failure, either a through-wall defect (leak) or a complete pipe severance (a large LOCA), is assumed to be caused by fatigue crack growth of an as-fabricated interior surface circumferential defect. These defects are assumed to be two-dimensional and semi-elliptical in shape. The distribution of initial crack sizes is a function of crack depth and aspect ratio. PRAISE treats the inter-arrival times of operating transients either as a constant or as exponentially distributed according to observed or postulated rates. Leak rate and leak detection models are also included. The criterion for complete pipe severance is exceedance of a net section critical stress. Earthquakes of various intensity and arbitrary occurrence times can be modeled. PRAISE presently assumes that exactly one initial defect exists in the weld and that the earthquake of interest is the first earthquake experienced at the reactor. PRAISE has a very modular structure and can be tailored to a variety of crack growth and piping reliability problems. Although PRAISE was developed on a CDC-7600 computer, it was coded in standard FORTRAN IV and is readily transportable to other machines.
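      A small sketch of one ingredient described above, sampling exponentially distributed inter-arrival times for operating transients (the rate and plant lifetime are hypothetical; PRAISE itself also supports constant inter-arrival times and is written in FORTRAN IV):

        import random

        def transient_arrival_times(rate_per_year, horizon_years, seed=0):
            """Sample occurrence times of operating transients over a plant lifetime,
            drawing inter-arrival times from an exponential distribution with the
            given mean rate, one of the two options the abstract says PRAISE supports."""
            rng = random.Random(seed)
            times, t = [], 0.0
            while True:
                t += rng.expovariate(rate_per_year)
                if t > horizon_years:
                    return times
                times.append(t)

        print(transient_arrival_times(rate_per_year=2.0, horizon_years=10.0))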

    14. Institutional computing (IC) information session

      SciTech Connect

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    15. ALGEBRA: a computer program that algebraically manipulates finite element output data. [In extended FORTRAN for CDC 7600 or CYBER 76 only

      SciTech Connect

      Richgels, M A; Biffle, J H

      1980-09-01

      ALGEBRA is a program that allows the user to process output data from finite-element analysis codes before they are sent to plotting routines. These data take the form of variable values (stress, strain, and velocity components, etc.) on a tape that is both the output tape from the analysis code and the input tape to ALGEBRA. The ALGEBRA code evaluates functions of these data and writes the function values on an output tape that can be used as input to plotting routines. Convenient input format and error detection capabilities aid the user in providing ALGEBRA with the functions to be evaluated. 1 figure.
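      As an example of the kind of function a user might ask such a postprocessor to evaluate (an illustration only, not ALGEBRA's input syntax), the effective von Mises stress can be derived from the six stress components written by the analysis code:

        import numpy as np

        def von_mises(sxx, syy, szz, sxy, syz, szx):
            """Effective (von Mises) stress computed from the six stress components,
            the sort of derived quantity that is then passed on to a plotting routine."""
            return np.sqrt(0.5 * ((sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2)
                           + 3.0 * (sxy**2 + syz**2 + szx**2))

        # element stress components (hypothetical values, in MPa)
        print(von_mises(120.0, 40.0, 10.0, 25.0, 0.0, 5.0))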

    16. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

    17. Computer System, Cluster and Networking Summer Institute (CSCNSI...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      is a focused technical enrichment program targeting third-year college undergraduate students currently engaged in a computer science, computer engineering, or similar major. ...

    18. Demystifying computer code for northern New Mexico students

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Laboratory employees recently helped elementary, middle, and high school students in northern New Mexico try their hands at computer programming during the Computer Science ...

    19. Large Scale Production Computing and Storage Requirements for...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and ...

    20. Unsolicited Projects in 2012: Research in Computer Architecture...

      Office of Science (SC)

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    1. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Mentors Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Hai Ah Nam Email Program Co-Lead Kris Garrett Email Program Co-Lead Joseph Schoonover Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Mentors Bob Robey Bob Robey XCP-2: EULERIAN CODES Bob Robey is a Research Scientist in the Eulerian Applications group at Los Alamos

    2. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Student Projects Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Hai Ah Nam Email Program Co-Lead Kris Garrett Email Program Co-Lead Joseph Schoonover Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Student Projects Students are highly encouraged to present their summer research at the LANL Student Symposium poster

    3. Programming Libraries

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Libraries Programming Libraries ALTD Automatic Library Tracking Database Infrastructure To track and monitor library usage and better serve your software needs, we have enabled the Automatic Library Tracking Database (ALTD) on our production systems, Hopper and Edison. ALTD is also available on Carver. ALTD, originally developed by the National Institute for Computational Sciences and further developed at NERSC, automatically and transparently tracks all libraries linked into an application at

    4. Givens Summer Associate Program | Argonne National Laboratory

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Math and Computer Science Givens Summer Associate Program "Pure mathematics is, in its way, the poetry of logical ideas." - Albert Einstein About the Program The Mathematics and ...

    5. Computer Security

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      computer security Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    6. Compute Nodes

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compute Nodes Compute Nodes Quad Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead
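      The quoted peak rates follow directly from the node and core counts listed above; a quick arithmetic check:

        cores_per_node = 4
        gflops_per_core = 9.2
        nodes = 9572

        gflops_per_node = cores_per_node * gflops_per_core   # 36.8 Gflops per node
        system_tflops = nodes * gflops_per_node / 1000.0      # about 352 Tflops for the machine
        print(gflops_per_node, round(system_tflops, 1))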

    7. Computer Science and Information Technology Student Pipeline

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Science and Information Technology Student Pipeline Program Description Los Alamos National Laboratory's High Performance Computing and Information Technology Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer Security, Software Engineering, Computer Engineering, and Electrical Engineering. Students are provided a mentor and challenging projects to demonstrate their

    8. Computer, Computational, and Statistical Sciences

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      CCS Computer, Computational, and Statistical Sciences Computational physics, computer science, applied mathematics, statistics and the integration of large data streams are central to scientific discovery and innovation. Leadership Division Leader Frank J. Alexander (Acting) Email Deputy Division Leader James Cooley (Acting) Email Earth climate map A single time step from an MPAS (Model for Prediction Across Scales) simulation, showing the temperature of the ocean. Building on research in human

    9. Computational Research and Theory (CRT) Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational Research and Theory (CRT) Facility Community Environmental Documents Tours Community Programs Friends of Berkeley Lab ⇒ Navigate Section Community Environmental Documents Tours Community Programs Friends of Berkeley Lab Project Description Wang Hall, previously the Computational Research and Theory Facility, is the new home for high performance computing at LBNL and houses the National Energy Research Scientific Computing Center (NERSC). NERSC supports DOE's mission to discover,

    10. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.

    11. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      Computer Science ACM Letters on Programming Languages and Systems (LOPLAS) ACM Transactions on Applied Perception (TAP) ACM Transactions on Architecture and Code Optimization ...

    12. Computational Design of Interfaces for Photovoltaics | Argonne...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational Design of Interfaces for Photovoltaics PI Name: Noa Marom PI Email: nmarom@tulane.edu Institution: Tulane University Allocation Program: ALCC Allocation Hours at...

    13. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      and Vehicle Technologies Program Annual Merit Review and Peer Evaluation PDF icon lm012li2011o.pdf More Documents & Publications Integrated Computational Materials Engineering ...

    14. high performance computing | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      high performance computing A story of tech transfer success: prize-winning innovation for HPC Last month, NNSA's Technology Transfer Program Manager for the Office of Strategic ...

    15. Computational Scientist | Princeton Plasma Physics Lab

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Department, with interest in leadership class computing of gyrokinetic fusion edge plasma research. A candidate who has knowledge in hybrid parallel programming with MPI, OpenMP,...

    16. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Project (Part 1) Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project (Part 1) 2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit...

    17. Computing Events

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Laboratory (pdf) DOE/NNSA Laboratories Fulfill National Mission with Trinity and Cielo Petascale Computers (pdf) Exascale Co-design Center for Materials in Extreme...

    18. Computer Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    19. Computational Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    20. Computing and Computational Sciences Directorate - Contacts

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    1. Computing and Computational Sciences Directorate - Divisions

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center ...

    2. 7th DOE workshop on computer-aided engineering

      SciTech Connect

      Not Available

      1991-01-01

      This report contains the abstracts and the program for the 7th DOE workshop on Computer-Aided Engineering. (LSP)

    3. Wind Energy Program: Top 10 Program Accomplishments | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Wind Energy Program: Top 10 Program Accomplishments Wind Energy Program: Top 10 Program Accomplishments Brochure on the top accomplishments of the Wind Energy Program, including the development of large wind machines, small machines for the residential market, wind tunnel testing, computer codes for modeling wind systems, high definition wind maps, and successful collaborations. top_10_wind_accomplishments (1.84 MB) More Documents & Publications Wind Program Accomplishments Wind Power Today,

    4. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect

      Osario, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21°C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes deliver sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20°C in 30 s and from 37 to 15°C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied for the design and development of cooling devices for seizure control.
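      A minimal sketch of the modeling approach named above, one explicit finite-difference step of the 1-D Pennes bioheat equation (the material parameters are rough order-of-magnitude assumptions for brain tissue, not the values used in the study, which solved the problem in ABAQUS):

        import numpy as np

        def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_rho_c_b=4e4, T_art=37.0, q_met=400.0):
            """One explicit step of rho*c dT/dt = k d2T/dx2 + w_b*rho_b*c_b (T_art - T) + q_met,
            the Pennes bioheat equation named in the abstract.  The cooled probe enters as a
            fixed-temperature boundary condition; all parameter values here are assumptions."""
            lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
            dTdt = (k * lap + w_rho_c_b * (T_art - T) + q_met) / rho_c
            T_new = T + dt * dTdt
            # boundary nodes held fixed: cooled probe on the left, body core on the right
            T_new[0], T_new[-1] = T[0], T[-1]
            return T_new

        # 1-cm slab of tissue at 37 C with a 15 C probe at one face (hypothetical setup)
        T = np.full(101, 37.0)
        T[0] = 15.0
        for _ in range(3000):                  # 3000 steps of 5 ms = 15 s of cooling
            T = pennes_step(T, dx=1e-4, dt=5e-3)
        print(T[:5].round(1))                  # temperatures nearest the probe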

    5. Radiological Worker Computer Based Training

      Energy Science and Technology Software Center

      2003-02-06

      Argonne National Laboratory has developed an interactive computer based training (CBT) version of the standardized DOE Radiological Worker training program. This CD-ROM based program utilizes graphics, animation, photographs, sound and video to train users in ten topical areas: radiological fundamentals, biological effects, dose limits, ALARA, personnel monitoring, controls and postings, emergency response, contamination controls, high radiation areas, and lessons learned.

    6. Introduction to computers: Reference guide

      SciTech Connect

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introductory computer course.

    7. Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT) (Presentation)

      SciTech Connect

      Pesaran, A. A.

      2011-05-01

      This presentation describes NREL's computer aided engineering program for electric drive vehicle batteries.

    8. Integrating Program Component Executables

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Integrating Program Component Executables on Distributed Memory Architectures via MPH Chris Ding and Yun He Computational Research Division, Lawrence Berkeley National Laboratory University of California, Berkeley, CA 94720, USA chqding@lbl.gov, yhe@lbl.gov Abstract A growing trend in developing large and complex applications on today's Teraflop computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the climate

    9. Compute Nodes

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    10. Computer Algebra System

      Energy Science and Technology Software Center

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
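      The operations listed above are the standard repertoire of a computer algebra system; a few of them reproduced with SymPy, a modern open-source system (an illustration of the capabilities, not DOE-MACSYMA's own syntax):

        import sympy as sp

        x = sp.symbols('x')
        f = sp.sin(x) * sp.exp(x)

        print(sp.diff(f, x))                          # differentiate
        print(sp.integrate(f, x))                     # integrate
        print(sp.series(f, x, 0, 5))                  # Taylor series about x = 0
        print(sp.solve(sp.Eq(x**2 - 3*x + 2, 0), x))  # solve a polynomial equation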

    11. Compute Nodes

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Nodes Quad Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    12. LHC Computing

      SciTech Connect

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

    13. Multiprocessor programming environment

      SciTech Connect

      Smith, M.B.; Fornaro, R.

      1988-12-01

      Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.

    14. Programs & User Facilities

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Science Programs » Office of Science » Programs & User Facilities Programs & User Facilities Enabling remarkable discoveries, tools that transform our understanding of energy and matter and advance national, economic, and energy security Advanced Scientific Computing Research Applied Mathematics Co-Design Centers Exascale Co-design Center for Materials in Extreme Environments (ExMatEx) Center for Exascale Simulation of Advanced Reactors (CESAR) Center for Exascale Simulation of

    15. Parallel Programming with MPI | Argonne Leadership Computing...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Balaji Rajeev Thakur Ken Raffenetti Halim Amer Speaker(s) Title: Argonne National Laboratory, MCS Event Website: https:www.mcs.anl.gov%7Eraffenetpermalinksargonne16mpi.php ...

    16. Parallel programming with PCN

      SciTech Connect

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

    17. Compute Nodes

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compute Nodes Compute Nodes MC-proc.png Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB

    18. Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Guide to Los Alamos Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Hai Ah Nam Email Program Co-Lead Kris Garrett Email Program Co-Lead Joseph Schoonover Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Guide to Los Alamos During your 10-week internship, we hope you have the opportunity to explore and enjoy Los Alamos and the

    19. Requirements for supercomputing in energy research: The transition to massively parallel computing

      SciTech Connect

      Not Available

      1993-02-01

      This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

    20. Computing Resources

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    1. Computer-Aided Design of Materials for use under High Temperature Operating Condition

      SciTech Connect

      Rajagopal, K. R.; Rao, I. J.

      2010-01-31

      The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures so that materials can be designed and engineered efficiently for the application in question, leading to enhanced performance of the material, a significant decrease in costs, and a shorter time to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high operating temperatures, the development of armor and missile heads, corrosion-resistant tanks and containers, better conductors of electricity, and the numerous other applications that are envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single-crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

    2. Programming Challenges Presentations | U.S. DOE Office of Science...

      Office of Science (SC)

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    3. CNL Programming Considerations on Franklin

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Programming » CNL Programming Considerations on Franklin CNL Programming Considerations on Franklin Shared Libraries (not supported) The Cray XT series currently do not support dynamic loading of executable code or shared libraries. Also, the related LD_PRELOAD environment variable is not supported. It is recommended to run shared-library applications on Hopper. GNU C Runtime Library glibc Functions The lightweight OS on the compute nodes, Compute Node Linux (CNL), is designed to optimize

    4. Hour of Code sparks interest in computer science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      STEM skills Community Connections: Your link to news and opportunities from Los Alamos National Laboratory. Hour of Code sparks interest in computer science: Taking the mystery out of programming, February 1, 2016. Hour of Code participants work their way through fun computer programming tutorials. Contacts Community Programs Director Kathy Keith Email

    5. Computational trigonometry

      SciTech Connect

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
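      For a symmetric positive definite matrix the first antieigenvalue has a closed form in terms of the extreme eigenvalues, cos(phi) = 2*sqrt(lam_min*lam_max)/(lam_min + lam_max), and the corresponding operator angle phi is what enters the trigonometric convergence bounds referred to above; a small numerical illustration (the test matrix is hypothetical):

        import numpy as np

        def operator_angle(A):
            """First antieigenvalue cos(phi) and operator angle phi of an SPD matrix A.

            cos(phi) = 2*sqrt(lam_min*lam_max)/(lam_min + lam_max) and
            sin(phi) = (lam_max - lam_min)/(lam_max + lam_min); the latter is the
            classical per-step error-reduction factor for steepest descent, which is
            the kind of trigonometric reading of convergence the abstract describes."""
            lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
            lam_min, lam_max = lam[0], lam[-1]
            cos_phi = 2.0 * np.sqrt(lam_min * lam_max) / (lam_min + lam_max)
            return cos_phi, float(np.arccos(cos_phi))

        A = np.diag([1.0, 4.0, 9.0])     # hypothetical SPD test matrix
        print(operator_angle(A))          # cos(phi) = 0.6, phi about 0.93 radians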

    6. Computational Structural Mechanics

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM ...

    7. Computing at JLab

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    8. Computational Combustion

      SciTech Connect

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel and homogeneous charge, compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations are described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

    9. RATIO COMPUTER

      DOEpatents

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.
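      One way to read the principle described above (an interpretation for illustration, not a transcription of the patented circuit): the comparison loop drives the shared feedback gain until the first channel's output matches the constant reference, and the same gain applied to the second channel then yields an output proportional to the quotient of the inputs.

        def ratio_circuit(v1, v2, v_ref=1.0):
            # The control loop is assumed to settle where gain * v1 == v_ref,
            # so gain = v_ref / v1; the second channel then outputs v_ref * v2 / v1,
            # i.e. a voltage proportional to the quotient v2 / v1.  (The product case
            # corresponds to the other placement of the fixed signal.)
            gain = v_ref / v1
            return gain * v2

        print(ratio_circuit(v1=2.0, v2=5.0))   # 2.5, proportional to 5/2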

    10. Computer System,

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory How to Apply Applications are due on or before December 1, 2016 Undergraduate and graduate students in computer science, engineering, and information technology related majors are encouraged to apply. Must be a U.S. Citizen. * Submit a current resume * Official university transcript (with Spring courses posted and/or a copy of Spring 2017 schedule) * Undergraduate 3.0 GPA minimum * Graduate

    11. GPU Computing - Dirac.pptx

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      GPU Computing with Dirac, Hemant Shukla. Architectural differences: a GPU has 512 cores with 10s to 100s of threads per core, hiding latency by fast context switching, while a CPU has fewer than 20 cores with 1-2 threads per core, hiding latency by a large cache. Programming models: CUDA (Compute Unified Device Architecture), OpenCL, and Microsoft's DirectCompute. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB, IDL, and

    12. Multiprocessor computing for images

      SciTech Connect

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    13. Supercomputing Challenge Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Supercomputers' Pictorial Superpowers: The Energy Department's INCITE program, which stands for the "Innovative and Novel Computational Impact on Theory and Experiment," recently put out a report highlighting the ways our supercomputers are catalyzing discoveries and innovations. Above, computing provides an unparalleled ability to model and simulate Type Ia (thermonuclear-powered) supernovas. The ability to do 3D, large-scale

    14. Stewardship Science Graduate Fellowship Programs | National Nuclear

      National Nuclear Security Administration (NNSA)

      Security Administration | (NNSA) Home / content Stewardship Science Graduate Fellowship Programs The Computational Science Graduate Fellowship (CSGF) The Department of Energy Computational Science Graduate Fellowship program provides outstanding benefits and opportunities to students pursuing doctoral degrees in fields of study that use high performance computing to solve complex science and engineering problems. The program fosters a community of bright, energetic and committed Ph.D.

    15. Program Analysis

      Energy.gov [DOE]

      2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Vehicle Technologies Plenary

    16. University Program in Advanced Technology | National Nuclear...

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    17. Program Structure | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      ASC at the Labs Supercomputers University Partnerships Predictive Science Academic ... ASC Program Elements Facility Operations and User Support Computational Systems & Software ...

    18. Center for Computing Research Summer Research Proceedings 2015.

      SciTech Connect

      Bradley, Andrew Michael; Parks, Michael L.

      2015-12-18

      The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

    19. Bringing Advanced Computational Techniques to Energy Research

      SciTech Connect

      Mitchell, Julie C

      2012-11-17

      Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

    20. Argonne programming camp sparks students' scientific curiosity | Argonne

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Argonne computer scientist Ti Leggett worked to design the computer programming curriculum, incorporating a mix of short lectures, computer time, and hands-on activities. The group that attended this summer's coding camp posed with their teachers and camp

    1. Sandia National Laboratories: Advanced Simulation and Computing: Facilities

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Operation & User Support Facilities Operation & User Support APPRO The Facilities, Operations and User Support (FOUS) program is responsible for operating and maintaining the computing systems procured by the Advanced Simulation and Computing (ASC) program, and for delivering additional computing related services to Defense Program customers located across the Nuclear Weapons Complex. Sandia has developed a robust User Support capability which provides various services to analysts,

    2. Program Evaluation: Program Life Cycle

      Energy.gov [DOE]

      In general, different types of evaluation are carried out over different parts of a program's life cycle (e.g., Creating a program, Program is underway, or Closing out or end of program)....

    3. Computing and Computational Sciences Directorate - Joint Institute...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these ...

    4. Computational Systems & Software Environment | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      Administration | (NNSA) Computational Systems & Software Environment The mission of this national sub-program is to build integrated, balanced, and scalable computational capabilities to meet the predictive simulation requirements of NNSA. This sub-program strives to provide users of ASC computing resources a stable and seamless computing environment for all ASC-deployed platforms. Along with these powerful systems that ASC will maintain and field the supporting software infrastructure

    5. Intro - High Performance Computing for 2015 HPC Annual Report

      SciTech Connect

      Klitsner, Tom

      2015-10-01

      The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

    6. Development of computer graphics

      SciTech Connect

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing air-borne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    7. Advanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center

      1998-10-27

      The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
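      A single-process sketch of the computational-steering pattern described above (this is not the ALICE Memory Snooper API; the class and method names are invented for illustration):

        import threading, time

        class PublishedVariables:
            """Toy illustration of computational steering: a running simulation
            publishes named variables that a monitor can read or change.  A minimal
            single-process sketch of the idea only, not the AMS library itself."""
            def __init__(self):
                self._vars, self._lock = {}, threading.Lock()
            def publish(self, name, value):
                with self._lock:
                    self._vars[name] = value
            def get(self, name):
                with self._lock:
                    return self._vars[name]
            def set(self, name, value):          # a "steering" update from the user
                with self._lock:
                    self._vars[name] = value

        ams = PublishedVariables()
        ams.publish("dt", 0.01)

        def simulation():
            for step in range(5):
                dt = ams.get("dt")               # the solver re-reads the published value
                ams.publish("step", step)
                time.sleep(0.01)

        t = threading.Thread(target=simulation)
        t.start()
        ams.set("dt", 0.005)                     # user changes the time step mid-run
        t.join()
        print(ams.get("step"), ams.get("dt"))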

    8. Computational Fluid Dynamics

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video ...

    9. High Performance Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    10. Large Scale Production Computing and Storage Requirements for...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017 ... The review brings together DOE Program Managers, leading domain scientists, and NERSC ...

    11. April 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      April 2013 Most Viewed Documents for Mathematics And Computing Publications in biomedical and environmental sciences programs, 1981 Moody, J.B. (comp.) (1982) 306 A comparison of ...

    12. Previous Computer Science Award Announcements | U.S. DOE Office...

      Office of Science (SC)

      Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges ... Runtime Systems, and Tools .pdf file (214KB) at approximately 15 million/year. ...

    13. ASCR Leadership Computing Challenge Requests for Time Due February...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing...

    14. INCITE grants awarded to 56 computational research projects ...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      "The INCITE program drives some of the world's most ambitious and groundbreaking computational research in science and engineering," said James Hack, director of the National ...

    15. Program predicts waterflooding performance

      SciTech Connect

      Fassihi, M.R.; O'Brien, W.J.

      1987-04-01

      Water is a handheld calculator program for estimating waterflooding performance in a multilayered oil reservoir for patterns such as five-spot, direct line drive and staggered line drive. Topics considered in this paper include oil wells, sweep efficiency, well stimulation, computer calculations, stratification, enhanced recovery, calculators, reservoir rock, and reservoir engineering.

    16. User Advisory Council | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      About Overview History Staff Directory Our Teams User Advisory Council Careers Margaret Butler Fellowship Visiting Us Contact Us User Advisory Council The User Advisory Council meets regularly to review major policies and to provide user feedback to the facility leadership. All council members are active Principal Investigators or users of ALCF computational resources through one or more of the allocation programs. Martin Berzins Professor Department of Computer Science Scientific Computing and

    17. Center for Computational Excellence | Argonne National Laboratory

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Center for Computational Excellence The Center for Computational Excellence (CCE) provides the connections, resources, and expertise that facilitate a more common HEP computing environment and, when possible, a move away from experiment-specific software. This means helping members of the community connect to one another to avoid reinventing the wheel by finding existing solutions or engineering experiment-independent solutions. HEP-CCE activity will take place under three types of programs. The first

    18. Science at ALCF | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Science at ALCF Allocation Program - Any - Argonne Data Science Program INCITE ALCC ESP Director's Discretionary Year - Any - 2008 2009 2010 2011 2012 2013 2014 2015 2016 Research Domain - Any - Physics Mathematics Computer Science Chemistry Earth Science Energy Technologies Materials Science Engineering Biological Sciences Apply

    19. Program evaluation: Weatherization Residential Assistance Partnership (WRAP) Program

      SciTech Connect

      Jacobson, Bonnie B.; Lundien, Barbara; Kaufman, Jeffrey; Kreczko, Adam; Ferrey, Steven; Morgan, Stephen

      1991-12-01

      The "Weatherization Residential Assistance Partnership," or WRAP, program is a fuel-blind conservation program designed to assist Northeast Utilities' low-income customers to use energy safely and efficiently. Innovative with respect to its collaborative approach and its focus on utilizing and strengthening the existing low-income weatherization service delivery network, the WRAP program offers an interesting model to other utilities which traditionally have relied on for-profit energy service contractors and highly centralized program implementation structures. This report presents appendices with surveys, a participant list, and computer programs to examine and predict potential energy savings.

    20. Parallel programming with PCN

      SciTech Connect

      Foster, I.; Tuecke, S.

      1993-01-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

    1. Educational Programs

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Educational Programs Educational Programs The Lab provides a variety of focused educational programs aimed at the development and application of essential knowledge and skills in scientific fields key to our national security mission. Contacts Student Programs Team Leader Scott Robbins National Security Education Center 505-665-3639 Email Los Alamos Educational Programs Educational programs at Los Alamos combine significant hands-on group project experiences with more traditional classroom

    2. Weatherization Program

      Energy.gov [DOE]

      Residences participating in the Home Energy Rebate or New Home Rebate Program may not also participate in the Weatherization Program

    3. Program Administration

      Directives, Delegations, and Other Requirements [Office of Management (MA)]

      1997-08-21

      This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.

    4. ASCR Workshop on Quantum Computing for Science

      SciTech Connect

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    5. Reactor Safety Research Programs

      SciTech Connect

      Edler, S. K.

      1981-07-01

      This document summarizes the work performed by Pacific Northwest Laboratory (PNL) from January 1 through March 31, 1981, for the Division of Reactor Safety Research within the U.S. Nuclear Regulatory Commission (NRC). Evaluations of nondestructive examination (NDE) techniques and instrumentation are reported; areas of investigation include demonstrating the feasibility of determining the strength of structural graphite, evaluating the feasibility of detecting and analyzing flaw growth in reactor pressure boundary systems, examining NDE reliability and probabilistic fracture mechanics, and assessing the integrity of pressurized water reactor (PWR) steam generator tubes where service-induced degradation has been indicated. Experimental data and analytical models are being provided to aid in decision-making regarding pipe-to-pipe impacts following postulated breaks in high-energy fluid system piping. Core thermal models are being developed to provide better digital codes to compute the behavior of full-scale reactor systems under postulated accident conditions. Fuel assemblies and analytical support are being provided for experimental programs at other facilities. These programs include loss-of-coolant accident (LOCA) simulation tests at the NRU reactor, Chalk River, Canada; fuel rod deformation, severe fuel damage, and post-accident coolability tests for the ESSOR reactor Super Sara Test Program, Ispra, Italy; the instrumented fuel assembly irradiation program at Halden, Norway; and experimental programs at the Power Burst Facility, Idaho National Engineering Laboratory (INEL). These programs will provide data for computer modeling of reactor system and fuel performance during various abnormal operating conditions.

    6. Cosmic Reionization On Computers | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Simulation of cosmic reionization. Dark red shows opaque neutral gas, transparent blue is ionized gas, and yellow dots are galaxies. Nick Gnedin, Fermilab Cosmic Reionization On Computers PI Name: Nickolay Gnedin PI Email: gnedin@fnal.gov Institution: Fermilab Allocation Program: INCITE Allocation Hours at ALCF: 65 Million Year: 2016 Research Domain: Physics Cosmic reionization, the most recent phase transition in the history of the universe, is the process by

    7. Ten Projects Awarded NERSC Allocations under DOE's ALCC Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Ten Projects Awarded NERSC Allocations under DOE's ALCC Program June 24, 2014 NERSC Computer Room. Photo by Roy Kaltschmidt, LBNL Under the Department of Energy's (DOE) ASCR Leadership Computing Challenge (ALCC) program, 10 research teams at national laboratories and universities have been awarded 382.5 million hours of computing time at the National Energy Research Scientific Computing Center (NERSC). The

    8. Substation grounding programs

      SciTech Connect

      Meliopoulos, A.P.S. (Electric Power Lab.)

      1992-05-01

      This document is a user's manual and applications guide for the software package SGA. This package comprises four computer programs, namely SOMIP, SMECC, SGSYS, and TGRND. The first three programs are analysis models which are to be used in the design process of substation grounding systems. The fourth program, TGRND, is an analysis program for determining the transient response of a grounding system. This report, Volume 5, is an applications guide for the three computer programs SOMIP, SMECC, and SGSYS, for the purpose of designing a safe substation grounding system. The applications guide utilizes four example substation grounding systems for the purpose of illustrating the application of the programs SOMIP, SMECC, and SGSYS. The examples are based on data provided by four contributing utilities, namely, Houston Lighting and Power Company, Southern Company Services, Puget Sound Power and Light Company, and Arizona Public Service Company. For the purpose of illustrating specific capabilities of the computer programs, the data have been modified. As a result, the final designs of the four systems do not necessarily represent actual grounding system designs by these utilities. Example system 1 is a 138 kV/35 kV distribution substation. Example system 2 is a medium-size 230 kV/115 kV transmission substation. The third example system is a generation substation, while the last is a large 525 kV/345 kV/230 kV transmission substation. The four examples cover most of the practical problems that a user may encounter in the design of substation grounding systems.

    9. Visiting Faculty Program Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The Visiting Faculty Program seeks to increase the research competitiveness of faculty members and their students at institutions historically underrepresented in the research community in order to expand the workforce vital to Department of Energy mission areas. As part of the program, selected university/college faculty members collaborate with DOE laboratory research staff on a research project of mutual interest. Program Objective The program is

    10. A Survey of Techniques for Approximate Computing

      DOE PAGES [OSTI]

      Mittal, Sparsh

      2016-03-18

      Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the workings of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
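
      A minimal Python sketch of one commonly cited approximate-computing technique, loop perforation, may help make the quality-versus-effort trade-off concrete. The function names, data, and skip factor below are illustrative only and are not taken from the survey: sampling every eighth element does roughly one-eighth of the work at the cost of a small, measurable error.

        import numpy as np

        def exact_mean(data):
            # baseline: visit every element
            return data.mean()

        def perforated_mean(data, skip=8):
            # loop perforation: visit only every skip-th element,
            # trading output quality for roughly skip-fold less work
            return data[::skip].mean()

        rng = np.random.default_rng(0)
        data = rng.normal(loc=10.0, scale=2.0, size=1_000_000)
        approx, exact = perforated_mean(data), exact_mean(data)
        print(f"relative error = {abs(approx - exact) / exact:.2e}")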

    11. (Sparsity in large scale scientific computation)

      SciTech Connect

      Ng, E.G.

      1990-08-20

      The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was on sparsity in large scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues having direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.
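
      For readers unfamiliar with the problem class mentioned above, the following Python/SciPy fragment sets up and solves a small sparse nonsymmetric linear system with both a direct and an iterative method. It is a generic illustration under assumed data (a tridiagonal convection-diffusion-like test matrix) and does not reproduce the ORNL algorithms discussed at the conference.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve, gmres

        n = 1000
        # tridiagonal, nonsymmetric test matrix in compressed sparse row format
        A = sp.diags([-1.5 * np.ones(n - 1), 4.0 * np.ones(n), -0.5 * np.ones(n - 1)],
                     offsets=[-1, 0, 1], format="csr")
        b = np.ones(n)

        x_direct = spsolve(A, b)      # sparse direct factorization
        x_iter, info = gmres(A, b)    # Krylov iteration suited to nonsymmetric systems
        print(info, np.linalg.norm(x_direct - x_iter))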

    12. Visiting Faculty Program Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      covers stipend and travel reimbursement for the 10-week program. Teacher/faculty participants: 1 Program Coordinator: Scott Robbins Email: srobbins@lanl.gov Phone number: 663-5621...

    13. Back to the ASCR Program Documents Page | U.S. DOE Office of Science (SC)

      Office of Science (SC)

      Program Documents » ASCR Program Documents Archive, Advanced Scientific Computing Research (ASCR), U.S. Department of Energy Office of Science

    14. Computational Electronics and Electromagnetics

      SciTech Connect

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.

    15. TRIDAC host computer functional specification

      SciTech Connect

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    16. The Exascale Computing Project awards

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Project awards $34 million for software development November 10, 2016 OAK RIDGE, Tenn., Nov. 10, 2016 - The Department of Energy's Exascale Computing Project (ECP) today announced the selection of 35 software development proposals representing 25 research and academic organizations. The awards for the first year of funding total $34 million and cover many components of the software stack for exascale systems, including programming models and runtime libraries, mathematical libraries and

    17. NERSC HPC Program Requirements Reviews Overview

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Scope: These workshops are focused on determining the computational challenges facing research teams and the computational resources scientists will need to meet their research objectives. The goal is to assure that NERSC, the DOE Office of Science, and its program offices will be able to provide the high performance computing and storage resources necessary to support the Office of Science's scientific goals. The merits of the scientific

    18. CAP Program Guidance | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      In 2002, the Department of Energy signed an interagency agreement with the Department of Defense's Computer/Electronic Accommodations Program (CAP) to provide assistive/adaptive technology free of charge to DOE employees with disabilities. The following information regarding CAP is being provided to assist federal employees, managers and on-site disability coordinators with the CAP application process. CAP Program Guidance (40.26 KB) Responsible

    19. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
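
      As a concrete illustration of the thread/block/grid decomposition described above, here is a hedged Python sketch using Numba's CUDA interface. It assumes the numba package and a CUDA-capable GPU are available, and the one-turn map is a toy linear rotation standing in for a real lattice pass; it is not the TracyGPU or Tracy++ code.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def track(x, px, n_turns):
            i = cuda.grid(1)          # global index = blockIdx.x * blockDim.x + threadIdx.x
            if i < x.size:            # one thread tracks one particle
                for _ in range(n_turns):
                    x_new = 0.9 * x[i] + 0.1 * px[i]   # toy stand-in for a lattice pass
                    px[i] = -0.1 * x[i] + 0.9 * px[i]
                    x[i] = x_new

        x = np.random.rand(4096)
        px = np.zeros_like(x)
        threads_per_block = 32                          # matches the 32 cores per MP noted above
        blocks = (x.size + threads_per_block - 1) // threads_per_block
        track[blocks, threads_per_block](x, px, 1000)   # grid of blocks launched as one kernel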

    20. Results from computational and experimental modeling of runaway electron damage on plasma facing components

      SciTech Connect

      Niemer, K.A.; Gilligan, J.G.; Croessmann, C.D.

      1994-11-01

      The purpose of this research was to extend the theoretical and experimental knowledge of runaway electron damage-impact-bombardment on plasma facing components and materials in magnetic fusion devices. The emphasis of this work involved computational modeling and experimental studies to investigate runaway electron energy deposition and thermal response in plasma facing materials. The goals were: (1) to develop a computational model to study and analyze runaway electron damage; (2) to characterize runaway electron parameters; and (3) to perform experiments to analyze runaway electron damage. These goals were accomplished by first assembling the PTA code package. PTA is a unique application of PATRAN, the Integrated TIGER Series (ITS), and ABAQUS for modeling high energy electron impact on magnetic fusion materials and components. The PTA code package provides a three-dimensional, time dependent, computational code package which predicts material response from runaway bombardment under most runaway conditions (i.e., electron energy, incident angle, energy density, and deposition time). As part of this research, PTA was used to study energy deposition and material response in several design applications, to analyze damaged material, and to analyze several experiments. Runaway electron characterization was determined through parametric studies, analysis of damaged materials, and analysis of experimental results. Characterization provided information on electron energy, incident angle, current, deposition time, and volume of material impacted by runaway electrons. Finally an experiment was performed on the Advanced Toroidal Facility (ATF) at Oak Ridge National Laboratory to study runaway electron damage. The experiment provided information on the runaway electron energy and current in ATF, as well as supplemented the existing experimental knowledge of runaway electron damage.

    1. ESnet Program Plan 1994

      SciTech Connect

      Merola, S.

      1994-11-01

      This Program Plan characterizes ESnet with respect to the current and future needs of Energy Research programs for network infrastructure, services, and development. In doing so, this document articulates the vision and recommendations of the ESnet Steering Committee regarding ESnet's development and its support of computer networking facilities and associated user services. To afford the reader a perspective from which to evaluate the ever-increasing utility of networking to the Energy Research community, we have also provided a historical overview of Energy Research networking. Networking has become an integral part of the work of DOE principal investigators, and this document is intended to assist the Office of Scientific Computing in ESnet program planning and management, including prioritization and funding. In particular, we identify the new directions that ESnet's development and implementation will take over the course of the next several years. Our basic goal is to ensure that the networking requirements of the respective scientific programs within Energy Research are addressed fairly. The proliferation of regional networks and additional network-related initiatives by other Federal agencies is changing the process by which we plan our own efforts to serve the DOE community. ESnet provides the Energy Research community with access to many other peer-level networks and to a multitude of other interconnected network facilities. ESnet's connectivity and relationship to these other networks and facilities are also described in this document. Major Office of Energy Research programs are managed and coordinated by the Office of Basic Energy Sciences, the Office of High Energy and Nuclear Physics, the Office of Magnetic Fusion Energy, the Office of Scientific Computing, and the Office of Health and Environmental Research. Summaries of these programs are presented, along with their functional and technical requirements for wide-area networking.

    2. Applications of Parallel Computers

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Applications of Parallel Computers, UCB CS267, Spring 2015, Tuesday & Thursday, 9:30-11:00 Pacific Time. Applications of Parallel Computers, CS267, is a graduate-level course...

    3. Previous Computer Science Award Announcements | U.S. DOE Office of Science

      Office of Science (SC)

      Previous Computer Science Award Announcements, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science (SC)

    4. Light Water Reactor Sustainability Program - Integrated Program...

      Energy Saver

      Light Water Reactor Sustainability Program - Integrated Program Plan. The Light Water Reactor Sustainability (LWRS) Program is a research and ...

    5. ASCR Leadership Computing Challenge Requests for Time Due February 14

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ASCR Leadership Computing Challenge Requests for Time Due February 14 November 17, 2011 by Francesca Verdier The ASCR Leadership Computing Challenge (ALCC) program is open to scientists from the research community in national laboratories, academia and industry. This program allocates time at NERSC and the Leadership Computing Facilities at Argonne and Oak Ridge. Areas of interest are: Advancing the clean energy agenda. Understanding the environmental impacts of

    6. Theory, Modeling and Computation

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Theory, Modeling and Computation The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme computing. CONTACT Jack Shlachter (505) 665-1888 Email Extreme Computing to Power Accurate Atomistic Simulations Advances in high-performance computing and theory allow longer and larger atomistic simulations than currently possible.

    7. Computer hardware fault administration

      DOEpatents

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
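
      The patent abstract describes the mechanism only at a high level. The toy Python sketch below, with node names, topology, and the routing helper all invented for illustration, shows the general idea of detecting a defective link in one network and routing the same traffic through a second, independent network; it is not the patented implementation.

        from collections import deque

        def bfs_route(adjacency, bad_links, src, dst):
            """Shortest-hop path from src to dst that avoids links marked defective."""
            frontier, seen = deque([[src]]), {src}
            while frontier:
                path = frontier.popleft()
                node = path[-1]
                if node == dst:
                    return path
                for nxt in adjacency.get(node, ()):
                    if nxt not in seen and frozenset((node, nxt)) not in bad_links:
                        seen.add(nxt)
                        frontier.append(path + [nxt])
            return None

        # two independent networks over the same four compute nodes (hypothetical wiring)
        net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        net_b = {0: [2], 2: [0, 3], 3: [1, 2], 1: [3]}
        defective = {frozenset((1, 2))}                       # fault identified in network A
        path = bfs_route(net_a, defective, 0, 3) or bfs_route(net_b, set(), 0, 3)
        print(path)                                           # traffic falls back to network B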

    8. Computational Earth Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    9. Applied & Computational Math

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Applied & Computational Math - Sandia Energy ...

    10. Molecular Science Computing | EMSL

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    11. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

    12. SC11 Education Program Applications due July 31

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      SC11 Education Program Applications due July 31 June 9, 2011 by Francesca Verdier Applications for the Education Program are now being accepted. Submission website: https://submissions.supercomputing.org Applications deadline: Sunday, July 31, 2011 Acceptance Notifications: Monday, August 22, 2011 The Education Program is hosting a four-day intensive program that will immerse participants in High Performance Computing (HPC) and Computational and

    13. HVAC Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      New Commercial Program Development Commercial Current Promotions Industrial Federal Agriculture Heating Ventilation and Air Conditioning Energy efficient Heating Ventilation and...

    14. Retiree Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Library Services » Retiree Program The Research Library offers a 1-year library card to retired LANL employees that allows usage of Library materials. This service is only available to retired LANL employees. Who is eligible? Any Laboratory retiree not participating in any other program (i.e., Guest Scientist, Affiliate). Upon completion of your application, you will be notified of your acceptance into the program. This does not include past students. What is the term of the

    15. Computing for Finance

      SciTech Connect

      2010-03-24

      remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    16. Computing for Finance

      ScienceCinema

      None

      2016-07-12

      remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    17. Computing for Finance

      ScienceCinema

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    18. Computing for Finance

      ScienceCinema

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    19. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

      SciTech Connect

      Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

      2010-12-02

      For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
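
      To make the "problem definition plus algorithmic verification test" idea concrete, here is a small hedged Python example in the same spirit: a CSR sparse matrix-vector product checked against an implementation-independent criterion. It is not one of the actual TORCH kernels or reference implementations.

        import numpy as np

        def spmv_csr(values, col_idx, row_ptr, x):
            """Reference-style CSR sparse matrix-vector product."""
            y = np.zeros(len(row_ptr) - 1)
            for i in range(len(y)):
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    y[i] += values[k] * x[col_idx[k]]
            return y

        def verify(y, dense_A, x, tol=1e-12):
            # the check is stated on the answer, not on any particular implementation
            return np.max(np.abs(y - dense_A @ x)) < tol

        A = np.array([[4.0, 0.0, 1.0],
                      [0.0, 3.0, 0.0],
                      [2.0, 0.0, 5.0]])
        rows, cols = np.nonzero(A)
        row_ptr = np.searchsorted(rows, np.arange(A.shape[0] + 1))
        x = np.array([1.0, 2.0, 3.0])
        print(verify(spmv_csr(A[rows, cols], cols, row_ptr, x), A, x))   # True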

    20. High performance computing and communications: FY 1997 implementation plan

      SciTech Connect

      1996-12-01

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

    1. Computational Science and Engineering

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    2. Computing and Computational Sciences Directorate - Information...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    3. Large Scale Production Computing and Storage Requirements for Fusion Energy

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

    4. Large Scale Production Computing and Storage Requirements for High Energy

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

    5. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

      Office of Science (SC)

      and Evolving MPI for Exascale | U.S. DOE Office of Science (SC) 2: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities

    6. Computational Spectroscopy of Heterogeneous Interfaces | Argonne Leadership

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computing Facility Complex interfaces between nanoparticles and a solvent. N. Brawand, University of Chicago Computational Spectroscopy of Heterogeneous Interfaces PI Name: Giulia Galli PI Email: gagalli@uchicago.edu Institution: University of Chicago Allocation Program: INCITE Allocation Hours at ALCF: 150 Million Year: 2016 Research Domain: Materials Science The interfaces between solids, nanoparticles and liquids play a fundamental

    7. Web Articles | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      In the News Supercomputers' Pit Crews DOE Office of Science QMC simulations reveal magnetic properties of titanium oxide material Phys.org INCITE Program Awards 5.78 Billion Hours to 55 Computational Research Projects HPCWire more news Web Articles Volume rendering from a 3D core-collapse supernova simulation Supercomputing Award of 5.78 Billion Hours to 55 Computational Research Projects The U.S. Department of Energy's Office of Science announced 55 projects with high potential for accelerating

    8. ALCF Acknowledgment Policy | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ALCF Acknowledgment Policy As a U.S. Department of Energy user facility dedicated to the advancement of scientific discoveries, the Argonne Leadership Computing Facility (ALCF) provides unique computing resources and expertise to a user community that is bound by certain policies designed to acknowledge and promote the work of others as well as the resources used to accomplish this work. The ALCF requests your continued compliance with the terms of your program or discretionary award,

    9. High Performance Computing Data Center Metering Protocol

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      High Performance Computing Data Center Metering Protocol. Prepared for: U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Federal Energy Management Program. Prepared by: Thomas Wenning and Michael MacDonald, Oak Ridge National Laboratory, September 2010. Introduction: Data centers in general are continually using more compact and energy intensive central processing units, but the total number and size of data centers continues to increase to meet progressive computing

    10. Computing for Finance

      ScienceCinema

      None

      2011-10-06

      with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons

    11. Exploring HPCS Languages in Scientific Computing

      SciTech Connect

      Barrett, Richard F; Alam, Sadaf R; de Almeida, Valmor F; Bernholdt, David E; Elwasif, Wael R; Kuehn, Jeffery A; Poole, Stephen W; Shet, Aniruddha G

      2008-01-01

      As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

    12. Science at ALCF | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Science at ALCF. Allocation Program: Argonne Data Science Program, INCITE, ALCC, ESP, Director's Discretionary. Year: 2008-2016. Research Domain: Physics, Mathematics, Computer Science, Chemistry, Earth Science, Energy Technologies, Materials Science, Engineering, Biological Sciences. An example of a Category 5 hurricane simulated by the CESM at 13 km resolution: Accelerated Climate Modeling for Energy, Mark Taylor, Sandia National

    13. Scalable Computer Performance and Analysis (Hierarchical INTegration)

      Energy Science and Technology Software Center

      1999-09-02

      HINT is a program to measure a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.
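
      As background, the HINT idea can be sketched roughly as follows. This is a loose Python paraphrase under stated assumptions (the integrand and the "quality improvements per second" metric are taken from the published HINT description as generally understood, not from this software record, and the real benchmark is considerably more elaborate): refining an interval subdivision tightens upper and lower bounds on an integral, so more memory or more processing power both raise the score.

        import time
        import numpy as np

        def hint_like_quality(n_cells):
            # assumed HINT-style integrand (1 - x) / (1 + x), monotone decreasing on [0, 1],
            # so per-cell upper/lower bounds are just the endpoint values times the width
            edges = np.linspace(0.0, 1.0, n_cells + 1)
            f = (1.0 - edges) / (1.0 + edges)
            width = 1.0 / n_cells
            upper = np.sum(f[:-1]) * width
            lower = np.sum(f[1:]) * width
            return 1.0 / (upper - lower)      # "quality": tighter bounds give higher quality

        start = time.perf_counter()
        quality = hint_like_quality(1_000_000)
        elapsed = time.perf_counter() - start
        print(f"QUIPS-like score ~ {quality / elapsed:.3e} quality improvements per second")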

    14. Final Report: Programming Models for Shared Memory Clusters

      SciTech Connect

      May, J.; de Supinski, B.; Pudliner, B.; Taylor, S.; Baden, S.

      2000-01-04

      Most large parallel computers now built use a hybrid architecture called a shared memory cluster. In this design, a computer consists of several nodes connected by an interconnection network. Each node contains a pool of memory and multiple processors that share direct access to it. Because shared memory clusters combine architectural features of shared memory computers and distributed memory computers, they support several different styles of parallel programming or programming models. (Further information on the design of these systems and their programming models appears in Section 2.) The purpose of this project was to investigate the programming models available on these systems and to answer three questions: (1) How easy to use are the different programming models in real applications? (2) How do the hardware and system software on different computers affect the performance of these programming models? (3) What are the performance characteristics of different programming models for typical LLNL applications on various shared memory clusters?
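
      As a rough analogy to the hybrid model described above, the Python sketch below uses processes to stand in for distributed-memory nodes and threads for the processors sharing memory within a node. It is illustrative only; production codes on shared memory clusters typically combine MPI with a threading model such as OpenMP, which this example does not use.

        from multiprocessing import Pool
        from concurrent.futures import ThreadPoolExecutor

        N_NODES, THREADS_PER_NODE = 4, 4

        def node_sum_of_squares(chunk):
            # within a "node", threads share the chunk directly (shared memory)
            with ThreadPoolExecutor(max_workers=THREADS_PER_NODE) as threads:
                parts = threads.map(lambda sub: sum(v * v for v in sub),
                                    [chunk[t::THREADS_PER_NODE] for t in range(THREADS_PER_NODE)])
            return sum(parts)

        if __name__ == "__main__":
            data = list(range(100_000))
            chunks = [data[n::N_NODES] for n in range(N_NODES)]   # explicit data distribution
            with Pool(processes=N_NODES) as nodes:                # processes model the cluster nodes
                total = sum(nodes.map(node_sum_of_squares, chunks))
            print(total)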

    15. Towards Energy-Centric Computing and Computer Architecture

      SciTech Connect

      2011-02-09

      Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding the Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software. About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL, where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar, he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000

    16. Thermal battery statistics and plotting programs

      SciTech Connect

      Scharrer, G.L.

      1990-04-01

      Thermal battery functional test data are stored in an HP3000 minicomputer operated by the Power Sources Department. A program was written to read data from a battery data base, compute simple statistics (mean, minimum, maximum, standard deviation, and K-factor), print out the results, and store the data in a file for subsequent plotting. A separate program was written to plot the data. The programs were written in the Pascal programming language. 1 tab.
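
      The statistics named in the abstract are straightforward to reproduce; the sketch below is a modern Python equivalent, with the caveat that the report's exact K-factor definition is not given here, so the tolerance-style bound shown is an assumed form and the sample voltages are invented.

        import numpy as np

        def battery_stats(values, k=3.0):
            v = np.asarray(values, dtype=float)
            stats = {"mean": v.mean(), "min": v.min(), "max": v.max(),
                     "std": v.std(ddof=1)}
            # assumed form of a K-factor result: a one-sided lower bound mean - k*std;
            # the original Pascal program's definition may differ
            stats["k_factor_bound"] = stats["mean"] - k * stats["std"]
            return stats

        print(battery_stats([27.8, 28.1, 27.5, 28.4, 27.9]))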

    17. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

    18. Extreme Scale Computing to Secure the Nation

      SciTech Connect

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

      Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U. S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction as well as data-intensive applications such as cryptography and cybersecurity U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S. and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high-end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear

    19. Thermal Hydraulic Computer Code System.

      Energy Science and Technology Software Center

      1999-07-16

      Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

    20. Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Description SAGE, the Summer of Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research. The program emphasizes both teaching of field methods and research related to basic science and a variety of applied problems. SAGE is hosted by the National Security Education Center and the Earth and Environmental Sciences Division of the Los Alamos National

    1. Polymorphous computing fabric

      DOEpatents

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    2. IMPACTS: Industrial Technologies Program, Summary of Program...

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      IMPACTS: Industrial Technologies Program, Summary of Program Results for CY2009 IMPACTS: Industrial Technologies Program, Summary of Program Results for CY2009 ...

    3. Light Water Reactor Sustainability Program - Integrated Program...

      Energy Saver

      Light Water Reactor Sustainability Program - Integrated Program Plan Light Water Reactor Sustainability Program - Integrated Program Plan The Light Water Reactor Sustainability ...

    4. HSS Voluntary Protection Program: Articles

      Energy.gov [DOE]

      AJHA Program - The Automated Job Hazard Analysis (AJHA) computer program is part of an enhanced work planning process employed at the Department of Energy's Hanford worksite. The AJHA system is routinely used to perform evaluations for medium- and high-risk work, and in the development of corrective maintenance work packages at the site. The tool is designed to ensure that workers are fully involved in identifying the hazards, requirements, and controls associated with tasks.

    5. Counterintelligence Program

      Directives, Delegations, and Other Requirements [Office of Management (MA)]

      1992-09-04

      To establish the policies, procedures, and specific responsibilities for the Department of Energy (DOE) Counterintelligence (CI) Program. This directive does not cancel any other directive.

    6. Programming Stage

      Directives, Delegations, and Other Requirements [Office of Management (MA)]

      1997-05-21

      This chapter addresses plans for the acquisition and installation of operating environment hardware and software and design of a training program.

    7. Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) Network programs, we provide STEM role models...

    8. exercise program

      National Nuclear Security Administration (NNSA)

      and dispose of many different hazardous substances, including radioactive materials, toxic chemicals, and biological agents and toxins.

      There are a few programs NNSA uses...

    9. Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Applied Geophysical Experience, is a unique educational program designed to introduce students in geophysics and related fields to "hands on" geophysical exploration and research....

    10. Volunteer Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      National VolunteerMatch Retired and Senior Volunteer Program United Way of Northern New Mexico United Way of Santa Fe County Giving Employee Giving Campaign Holiday Food Drive...

    11. Special Programs

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      and Application Center for Hydrogen Energy Research Programs ARPA-E Basic Energy Sciences ... Sea State Contour) Code Online Abstracts and Reports Water Power Personnel ...

    12. Counterintelligence Program

      Directives, Delegations, and Other Requirements [Office of Management (MA)]

      2004-12-10

      The Order establishes Counterintelligence Program requirements and responsibilities for the Department of Energy, including the National Nuclear Security Administration. Supersedes DOE 5670.3.

    13. Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Description Inspiring girls to recognize their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) ...

    14. Special Programs

      Office of Energy Efficiency and Renewable Energy (EERE)

      Headquarters Human Resources Operations promotes a variety of hiring flexibilities for managers to attract a diverse workforce, from Student Internship Program opportunities (Pathways), Veteran...

    15. PACKAGE (Plasma Analysis, Chemical Kinetics and Generator Efficiency): a computer program for the calculation of partial chemical equilibrium/partial chemical rate controlled composition of multiphased mixtures under one dimensional steady flow

      SciTech Connect

      Yousefian, V.; Weinberg, M.H.; Haimes, R.

      1980-02-01

      The NASA CEC Code was the starting point for PACKAGE, whose function is to evaluate the composition of a multiphase combustion product mixture under the following chemical conditions: (1) total equilibrium with pure condensed species; (2) total equilibrium with ideal liquid solution; (3) partial equilibrium/partial finite rate chemistry; and (4) fully finite rate chemistry. The last three conditions were developed to treat the evolution of complex mixtures such as coal combustion products. The thermodynamic variable pairs considered are either pressure (P) and enthalpy, P and entropy, or P and temperature. Minimization of Gibbs free energy is used. This report gives detailed discussions of the formulation and input/output information used in the code. Sample problems are given. The code development, description, and current programming constraints are discussed. (DLC)
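
      The equilibrium calculation described above can be illustrated with a small constrained minimization. The sketch below is illustrative only and is not the PACKAGE code: the species set, element matrix, and dimensionless standard-state Gibbs energies are assumed values chosen so that water formation dominates, and SciPy's SLSQP optimizer stands in for the code's own solver.

      import numpy as np
      from scipy.optimize import minimize

      species = ["H2", "O2", "H2O"]
      g0 = np.array([0.0, 0.0, -20.0])   # assumed dimensionless mu0/RT values
      A = np.array([[2, 0, 2],           # H atoms per mole of each species
                    [0, 2, 1]])          # O atoms per mole of each species
      b = np.array([4.0, 2.0])           # element totals for a 2 H2 + 1 O2 feed

      def gibbs(n):
          # G/RT of an ideal-gas mixture at 1 atm: sum n_i * (g0_i + ln(n_i/n_tot))
          n = np.maximum(n, 1e-12)
          return float(np.sum(n * (g0 + np.log(n / n.sum()))))

      result = minimize(gibbs, x0=np.array([1.0, 0.5, 1.0]), method="SLSQP",
                        bounds=[(1e-10, None)] * len(species),
                        constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
      print(dict(zip(species, result.x.round(4))))   # essentially all H2O at equilibrium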

    16. NV Energy -Energy Smart Schools Program | Department of Energy

      Energy.gov [DOE] (indexed site)

      pending approval Vending Machine Controls Personal Computing Equipment Program Info Sector Name Utility Administrator Nevada Power Company Website http:www.nvenergy.com...

    17. Energy Conservation Program for Consumer Products and Certain...

      Energy Saver

      Energy Conservation Program for Consumer Products and Certain Commercial and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product, ...

    18. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOEpatents

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
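
      The claim describes a hybrid eager/rendezvous transfer: memory FIFO chunks keep the link busy until the RTS is acknowledged, after which the remainder moves in a single direct put. The sketch below illustrates only that control flow; the function names (rts_send, fifo_send, direct_put, ack_received) and the chunk size are hypothetical stand-ins, not the patented DMA engine interface.

      CHUNK = 4096  # bytes pushed per memory-FIFO operation (assumed size)

      def send(data, fifo_send, direct_put, rts_send, ack_received):
          """fifo_send/direct_put/rts_send stand in for DMA engine calls;
          ack_received() polls for the target's acknowledgement of the RTS."""
          rts_send(len(data))                    # request to send, with total length
          offset = 0
          # Keep the link busy with FIFO chunks while waiting for the ACK.
          while not ack_received() and offset < len(data):
              fifo_send(data[offset:offset + CHUNK])
              offset += CHUNK
          # Once the ACK arrives (target buffer ready), put the rest directly.
          if offset < len(data):
              direct_put(data[offset:])

      # Dummy stand-ins so the sketch can be exercised without real hardware.
      if __name__ == "__main__":
          acked = iter([False, False, True])     # ACK arrives after two polls
          send(b"x" * 10000,
               fifo_send=lambda chunk: print("FIFO chunk", len(chunk)),
               direct_put=lambda rest: print("direct put", len(rest)),
               rts_send=lambda total: print("RTS for", total, "bytes"),
               ack_received=lambda: next(acked, True))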

    19. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
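
      The central idea, substituting a custom numeric type so that ordinary-looking arithmetic carries hundreds of digits, can be illustrated with Python's standard decimal module. This is an analogy only, not the package's C++/Fortran-90 interface.

      from decimal import Decimal, getcontext

      getcontext().prec = 200                 # work with roughly 200 significant digits
      third = Decimal(1) / Decimal(3)         # 0.333... carried to 200 digits
      print(Decimal(2).sqrt())                # sqrt(2) to 200 digits
      print(+(third * 3))                     # 0.999...9, rounded in this context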

    20. Programming Challenges Workshop | U.S. DOE Office of Science (SC)

      Office of Science (SC)

      Programming Challenges Workshop Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC)

    1. Programming Challenges Abstracts | U.S. DOE Office of Science (SC)

      Office of Science (SC)

      Programming Challenges Abstracts and Biographies Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory

    2. Programming Challenges Presentations | U.S. DOE Office of Science (SC)

      Office of Science (SC)

      Programming Challenges Presentations Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee

    3. Overview of the Defense Programs Research and Technology Development Program for fiscal year 1993. Appendix materials

      SciTech Connect

      Not Available

      1993-09-30

      The pages that follow contain summaries of the nine R&TD Program Element Plans for Fiscal Year 1993 that were completed in the Spring of 1993. The nine program elements are aggregated into three program clusters as follows: Design Sciences and Advanced Computation; Advanced Manufacturing Technologies and Capabilities; and Advanced Materials Sciences and Technology.

    4. Webinar: AspireIT K-12 Outreach Program

      Office of Energy Efficiency and Renewable Energy (EERE)

      AspireIT K-12 Outreach Program is a grant that connects high school and college women with K-12 girls interested in computing. Using a near-peer model, program leaders teach younger girls...

    5. Cognitive Computing for Security.

      SciTech Connect

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    6. Computers in Commercial Buildings

      Energy Information Administration (EIA) (indexed site)

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    7. Apply for the Parallel Computing Summer Research Internship

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      How to Apply Apply for the Parallel Computing Summer Research Internship Creating next-generation leaders in HPC research and applications development Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Hai Ah Nam Email Program Co-Lead Kris Garrett Email Program Co-Lead Joseph Schoonover Email Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048 Email Application deadline is January 27, 2017 with notification by mid-February 2017. Who can apply? Upper division undergraduate

    8. Student Internship Programs Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Student Internship Programs Program Description The objective of the Laboratory's student internship programs is to provide students with opportunities for meaningful hands-on experience supporting educational progress in their selected scientific or professional fields. The most significant impact of these internship experiences is observed in the intellectual growth experienced by the participants. Student interns are able to appreciate the practical value of their education efforts in their

    9. Cupola Furnace Computer Process Model

      SciTech Connect

      Seymour Katz

      2004-12-31

      The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).

    10. Computers-BSA.ppt

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Energy Computers, Electronics and Electrical Equipment (2010 MECS) Computers, Electronics and Electrical Equipment (2010 MECS) Manufacturing Energy and Carbon Footprint for Computers, Electronics and Electrical Equipment Sector (NAICS 334, 335) Energy use data source: 2010 EIA MECS (with adjustments) Footprint Last Revised: February 2014 View footprints for other sectors here. Manufacturing Energy and Carbon Footprint Computers, Electronics and Electrical Equipment (123.71 KB) More Documents

    11. Advanced Scientific Computing Research

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Galen Shipman (505) 665-4021 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    12. Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ARGONNE LEADERSHIP COMPUTING FACILITY The 10-petaflops Mira supercomputer The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, provides its user community with computing time and staff support to pursue significant breakthroughs in science and engineering. The ALCF is one of two DOE leadership computing facilities in the nation dedicated to open science. www.alcf.anl.gov ENABLING SCIENCE With hundreds of thousands of processors

    13. Tutorial on computer control

      SciTech Connect

      Juras, R.C.

      1987-09-01

      This paper discusses computer architecture modifications and development used to control particle accelerators. 6 refs., 3 figs.

    14. Computing Resources | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20-megavolt-ampere electrical feeds from a 90-megawatt substation. The building also ...

    15. Fermilab | Science at Fermilab | Computing | Grid Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      which would collect more data than any computing center in existence could process. ... consortium grid called Open Science Grid, so they initiated a project known as FermiGrid. ...

    16. Program Description | Robotics Internship Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      March 4, 2016. Apply Now for the Robotics Internship About the Internship Program Description Start of Appointment Renewal of Appointment End of Appointment Stipend Information...

    17. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

      SciTech Connect

      Corones, James

      2013-09-23

      High-end computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington, DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

    18. Mathematical and Computational Epidemiology

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Mathematical and Computational Epidemiology, Los Alamos National Laboratory. Research areas: Agent-based Modeling; Mixing Patterns, Social Networks; Mathematical Epidemiology; Social Internet Research; Uncertainty Quantification. Mathematical and Computational Epidemiology (MCEpi): Quantifying model uncertainty in agent-based simulations for

    19. Computing environment logbook

      DOEpatents

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
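
      As a deliberately simplified sketch of the concept (not the patented implementation), the toy class below records events, searches the history, and undoes a selected past event by running a caller-supplied undo action; the class and method names are invented for illustration.

      import os

      class Logbook:
          def __init__(self):
              self.events = []                      # list of (description, undo_fn)

          def log(self, description, undo_fn=None):
              self.events.append((description, undo_fn))

          def search(self, text):
              # Return (index, description) for every past event matching the query.
              return [(i, d) for i, (d, _) in enumerate(self.events) if text in d]

          def undo(self, index):
              description, undo_fn = self.events[index]
              if undo_fn:
                  undo_fn()                         # revert the recorded action
              self.events.append((f"undo: {description}", None))

      # Usage: log a file creation, find it later, and undo it.
      book = Logbook()
      open("scratch.txt", "w").close()
      book.log("created scratch.txt", undo_fn=lambda: os.remove("scratch.txt"))
      print(book.search("scratch"))
      book.undo(0)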

    20. BNL ATLAS Grid Computing

      ScienceCinema

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    1. NNSA releases Stockpile Stewardship Program quarterly experiments summary |

      National Nuclear Security Administration (NNSA)

      National Nuclear Security Administration (NNSA) releases Stockpile Stewardship Program quarterly experiments summary May 12, 2015 WASHINGTON, DC - The National Nuclear Security Administration today released its current quarterly summary of experiments conducted as part of its science-based Stockpile Stewardship Program. The experiments carried out within the program are used in combination with complex computational models and NNSA's Advanced Simulation and Computing (ASC) Program to

    2. Parallel computing in enterprise modeling.

      SciTech Connect

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

    3. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    4. Scalable optical quantum computer

      SciTech Connect

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    5. Final Report: Correctness Tools for Petascale Computing

      SciTech Connect

      Mellor-Crummey, John

      2014-10-27

      In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, University of Maryland, the University of Wisconsin—Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.

    6. NNSA's Computing Strategy, Acquisition Plan, and Basis for Computing Time Allocation

      SciTech Connect

      Nikkel, D J

      2009-07-21

      This report is in response to the Omnibus Appropriations Act, 2009 (H.R. 1105; Public Law 111-8) in its funding of the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program. This bill called for a report on ASC's plans for computing and platform acquisition strategy in support of stockpile stewardship. Computer simulation is essential to the stewardship of the nation's nuclear stockpile. Annual certification of the country's stockpile systems, Significant Finding Investigations (SFIs), and execution of Life Extension Programs (LEPs) are dependent on simulations employing the advanced ASC tools developed over the past decade plus; indeed, without these tools, certification would not be possible without a return to nuclear testing. ASC is an integrated program involving investments in computer hardware (platforms and computing centers), software environments, integrated design codes and physical models for these codes, and validation methodologies. The significant progress ASC has made in the past derives from its focus on mission and from its strategy of balancing support across the key investment areas necessary for success. All these investment areas must be sustained for ASC to adequately support current stockpile stewardship mission needs and to meet ever more difficult challenges as the weapons continue to age or undergo refurbishment. The appropriations bill called for this report to address three specific issues, which are responded to briefly here but are expanded upon in the subsequent document: (1) Identify how computing capability at each of the labs will specifically contribute to stockpile stewardship goals, and on what basis computing time will be allocated to achieve the goal of a balanced program among the labs. (2) Explain the NNSA's acquisition strategy for capacity and capability of machines at each of the labs and how it will fit within the existing budget constraints. (3) Identify the technical

    7. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    8. Program Overview

      Energy.gov [DOE]

      The culture of the DOE community will be based on standards. Technical standards will be formally integrated into all DOE facility, program, and project activities. The DOE will be recognized as a...

    9. Integrated Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Review (IPR) Quarterly Business Review (QBR) Access to Capital Debt Management July 2013 Aug. 2013 Sept. 2013 Oct. 2013 Nov. 2013 Dec. 2013 Jan. 2014 Feb. 2014 March...

    10. Deconvolution Program

      Energy Science and Technology Software Center

      1999-02-18

      The program is suitable for many applications in applied mathematics, experimental physics, and signal-analysis systems, as well as a range of engineering applications, e.g., spectrum deconvolution, signal analysis, and system property analysis.
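
      A minimal numerical illustration of spectrum deconvolution (not the packaged program itself) is sketched below: a synthetic spike train is blurred with a known Gaussian kernel and then recovered by regularized FFT division. The signal, kernel width, and regularization constant are all assumed values chosen for the example.

      import numpy as np

      n = 128
      signal = np.zeros(n)
      signal[[30, 60, 61, 90]] = [1.0, 0.8, 0.5, 1.2]        # synthetic spikes
      x = np.arange(n)
      kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)       # Gaussian blur kernel
      kernel /= kernel.sum()

      K = np.fft.fft(kernel)
      blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))  # circular convolution

      eps = 1e-3                                              # regularization term
      recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) /
                                      (np.abs(K) ** 2 + eps)))
      print(np.argsort(recovered)[-4:])   # the largest samples cluster near the spikes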

    11. Programming models

      SciTech Connect

      Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim

      2011-01-14

      A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.

    12. Quality Program

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      ... for Preparing QA Project Plans." 8.4 American Society of Mechanical Engineers (ASME)NQA-1, "Quality Assurance Program Requirements for Nuclear Facilities." 8.5 American Nuclear ...

    13. Science Programs

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The focal point for basic and applied R&D programs with a primary focus on energy but also encompassing medical, biotechnology, high-energy physics, and advanced scientific ...

    14. Program Analyst

      Energy.gov [DOE]

      A successful candidate in this position will serve as a Program Analyst for the System Operations team in the area of regulatory compliance. The successful candidate will also become a subject...

    15. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

      DOE Data Explorer

      James Menart

      2013-06-07

      This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

    16. Volunteer Program

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Volunteer Program Volunteer Program Our good neighbor pledge includes active employee engagement in our communities through volunteering. More than 3,000 current and retired Lab employees have logged more than 1.8 million volunteer hours since 2007. August 19, 2015 Los Alamos National Laboratory employee volunteers with Mountain Canine Corps Lab employee Debbi Miller volunteers for the Mountain Canine Corps with her search and rescue dogs. She also volunteers with another search and rescue

    17. Program Summaries

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Summaries Basic Energy Sciences (BES) BES Home About Research Facilities Science Highlights Benefits of BES Funding Opportunities Basic Energy Sciences Advisory Committee (BESAC) Community Resources Program Summaries Brochures Reports Accomplishments Presentations BES and Congress Science for Energy Flow Seeing Matter Nano for Energy Scale of Things Chart Contact Information Basic Energy Sciences U.S. Department of Energy SC-22/Germantown Building 1000 Independence Ave., SW Washington,

    18. Educational Programs

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Educational Programs Educational Programs A collaboration between Los Alamos National Laboratory and the University of California at San Diego (UCSD) Jacobs School of Engineering Contact Institute Director Charles Farrar (505) 663-5330 Email UCSD EI Director Michael Todd (858) 534-5951 Professional Staff Assistant Ellie Vigil (505) 667-2818 Email Administrative Assistant Rebecca Duran (505) 665-8899 Email There are two educational components to the Engineering Institute. The Los Alamos Dynamic

    19. Parallel programming with Ada

      SciTech Connect

      Kok, J.

      1988-01-01

      To the human programmer, the ease of coding distributed computing depends strongly on the suitability of the programming language employed. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. This paper discusses the possibilities of the high-level language Ada, and in particular of its tasking concept, as a descriptive tool for the design and implementation of numerical and other algorithms that allow parts to execute in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

    20. Overview of Computer-Aided Engineering of Batteries (CAEBAT) and

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries | Department of Energy Computer-Aided Engineering of Batteries (CAEBAT) and Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries Overview of Computer-Aided Engineering of Batteries (CAEBAT) and Introduction to Multi-Scale, Multi-Dimensional (MSMD) Modeling of Lithium-Ion Batteries 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit

    1. Computer System, Cluster and Networking Summer Institute (CSCNSI)

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Laboratory National Security Education Center Menu About Seminar Series Summer Schools Workshops Viz Collab IS&T Projects NSEC » Information Science and Technology Institute (ISTI) » Summer School Programs » CSCNSI Computer System, Cluster and Networking Summer Institute Emphasizes practical skills development Contacts Program Lead Carolyn Connor (505) 665-9891 Email Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048 Email Purpose The Computer System, Cluster, and Networking

    2. Mira Computational Readiness Assessment | Argonne Leadership Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Energy Minority Serving Institution Technical Consortium Model Minority Serving Institution Technical Consortium Model In October 2012, the National Nuclear Security Administration (NNSA) awarded $4 million in grants to 22 Historically Black Colleges and Universities (HBCUs) in key STEM areas. This funding launched NNSA's new Minority Serving Institution Partnership Program, a consortium program organized to build a sustainable STEM pipeline between six Energy Department plants and

    3. An Integrated Development Environment for Adiabatic Quantum Programming

      SciTech Connect

      Humble, Travis S; McCaskey, Alex; Bennink, Ryan S; Billings, Jay Jay; D'Azevedo, Eduardo; Sullivan, Blair D; Klymko, Christine F; Seddiqi, Hadayat

      2014-01-01

      Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

    4. Sandia National Laboratories: Rebooting computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Sandia explores neural computing to extend Moore's Law. Image credit: NICHDS, Jeong. Computation is stuck in a rut. The ...

    5. Sandia Energy - High Performance Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

    6. Software Systems for High-performance Quantum Computing

      SciTech Connect

      Humble, Travis S; Britt, Keith A

      2016-01-01

      Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

    7. High performance computing and communications: FY 1996 implementation plan

      SciTech Connect

      1995-05-16

      The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

    8. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect

      GLIMM,J.

      2002-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    9. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect

      GLIMM,J.

      2001-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    10. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect

      GLIMM,J.

      2003-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    11. Berkeley Lab Opens State-of-the-Art Facility for Computational...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Complementing NERSC and ESnet in the facility will be research programs in applied mathematics and computer science that develop new methods for advancing scientific discovery. ...

    12. Debugging automation tools based on event grammars and computations over traces

      SciTech Connect

      Auguston, M.

      1997-11-01

      This report contains viewgraphs whose purpose is to describe research into, and the design of, software testing and debugging automation tools, such as a language for computations over source program execution history.
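
      The underlying idea, recording an execution trace and then computing over it, can be sketched with a simple decorator. The sketch below is a hedged illustration of that idea only; it is not the event-grammar language described in the report.

      import functools

      trace = []                                    # global event history

      def traced(fn):
          @functools.wraps(fn)
          def wrapper(*args, **kwargs):
              trace.append(("call", fn.__name__, args))
              result = fn(*args, **kwargs)
              trace.append(("return", fn.__name__, result))
              return result
          return wrapper

      @traced
      def fib(n):
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      fib(5)
      # A "computation over the trace": count how often fib was re-invoked with
      # the same argument, a typical debugging/performance query.
      calls = [a[0] for kind, name, a in trace if kind == "call"]
      print({n: calls.count(n) for n in set(calls)})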

    13. Argonne Leadership Computing Facility A R G O N N E L E A D

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ... through Advanced Computing (SciDAC) program. ... Journal of Geophysical Research Atmospheres, May 2013, John Wiley & Sons, ... Molecular Physics: An International Journal at the ...

    14. Plasma Simulation Program

      SciTech Connect

      Greenwald, Martin

      2011-10-04

      Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one, and all are supplemental to or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities. [1]. Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical

    15. The HILDA program

      SciTech Connect

      Close, E.; Fong, C.; Lee, E.

      1991-10-30

      Although this report is called a program document, it is not simply a user's guide to running HILDA, nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for Heavy Ion Fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) A relatively detailed, but easily understood model. (2) Modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface. (3) Documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.

    16. Programming in Fortran M

      SciTech Connect

      Foster, I.; Olson, R.; Tuecke, S.

      1993-08-01

      Fortran M is a small set of extensions to Fortran that supports a modular approach to the construction of sequential and parallel programs. Fortran M programs use channels to plug together processes, which may be written in Fortran M or Fortran 77. Processes communicate by sending and receiving messages on channels. Channels and processes can be created dynamically, but programs remain deterministic unless specialized nondeterministic constructs are used. Fortran M programs can execute on a range of sequential, parallel, and networked computers. This report incorporates both a tutorial introduction to Fortran M and a user's guide for the Fortran M compiler developed at Argonne National Laboratory. The Fortran M compiler, supporting software, and documentation are made available free of charge by Argonne National Laboratory, but are protected by a copyright which places certain restrictions on how they may be redistributed. See the software for details. The latest version of both the compiler and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/fortran-m at info.mcs.anl.gov.
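
      Purely as an analogy, the channel-and-process style described above can be imitated with Python's multiprocessing queues; the sketch below is not Fortran M syntax, and the producer/consumer structure is invented for illustration.

      from multiprocessing import Process, Queue

      def producer(channel):
          for i in range(5):
              channel.put(i * i)          # send a message on the channel
          channel.put(None)               # end-of-stream marker

      def consumer(channel):
          while (msg := channel.get()) is not None:
              print("received", msg)

      if __name__ == "__main__":
          ch = Queue()                    # the "channel" plugging the two processes together
          p = Process(target=producer, args=(ch,))
          c = Process(target=consumer, args=(ch,))
          p.start(); c.start(); p.join(); c.join()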

    17. Student Internship Programs Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      for a summer high school student to 75,000 for a Ph.D. student working full-time for a year. Program Coordinator: Scott Robbins Email: srobbins@lanl.gov Phone number: 663-5621...

    18. Computer-Aided Engineering of Batteries for Designing Better Li-Ion Batteries (Presentation)

      SciTech Connect

      Pesaran, A.; Kim, G. H.; Smith, K.; Lee, K. J.; Santhanagopalan, S.

      2012-02-01

      This presentation describes the current status of the DOE's Energy Storage R&D program, including modeling and design tools and the Computer-Aided Engineering for Automotive Batteries (CAEBAT) program.

    19. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

      Energy.gov [DOE]

      Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

    20. Development of Computer-Aided Design Tools for Automotive Batteries |

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Department of Energy 8_hartridge_2012_o.pdf (1.32 MB) More Documents & Publications Progress of Computer-Aided Engineering of Batteries (CAEBAT) Vehicle Technologies Office Merit Review 2014: Development of Computer-Aided Design Tools for Automotive Batteries Review of A123s HEV and PHEV USABC Programs

    1. Scientific Cloud Computing Misconceptions

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Scientific Cloud Computing Misconceptions July 1, 2011 Part of the Magellan project was to understand both the possibilities and the limitations of cloud computing in the pursuit of science. At a recent conference, Magellan investigator Shane Canon outlined some persistent misconceptions about doing science in the cloud - and what Magellan has taught us about them. » Read the ISGTW story. » Download the slides (PDF, 4.1MB)

    2. NERSC Computer Security

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident, you should contact NERSC Security as soon as

    3. Edison Electrifies Scientific Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    4. Student teams showcase year-long computing projects

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Student teams showcase year-long computing projects Student teams showcase year-long computing projects The Challenge is project-based learning geared to teaching a wide range of skills: research, writing, teamwork, time management, oral presentations and computer programming. April 19, 2016 Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience,

    5. Applied Computer Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader ...

    6. computational fluid dynamics

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      computational fluid dynamics - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary ...

    7. Personal Computer Inventory System

      Energy Science and Technology Software Center

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    8. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      60 Years of Computing

    9. Postdoctoral Program Program Description The Postdoctoral (Postdoc...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Postdoctoral Program Program Description The Postdoctoral (Postdoc) Research program offers the opportunity for appointees to perform research in a robust scientific R&D...

    10. Machinist Pipeline/Apprentice Program Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Machinist Pipeline/Apprentice Program Program Description The Machinist Pipeline Program was created by the Prototype Fabrication Division to fill a critical need for skilled ...

    11. 2011 Computation Directorate Annual Report

      SciTech Connect

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated

    12. The Macolumn - the Mac gets geophysical. [A review of geophysical software for the Apple Macintosh computer

      SciTech Connect

      Busbey, A.B.

      1990-02-01

      Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.

    13. Guidelines for Academic Cooperation Program (ACP)

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      academic cooperation program Guidelines for Academic Cooperation Program (ACP) It is the responsibility of all Academic Cooperation Program (ACP) users to: Understand their task(s) Understand the potential hazards associated with their experiment(s) Comply fully with all LLNL safety and computer security regulations and procedures. Incoming ACPs must work under mandatory line-of-sight supervision at, and above, the Work Authorization Level B per ES&H Manual Document 2.2, Table 2 on page 7.

    14. Cori Phase 1 Training: Programming and Optimization

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Optimization Cori Phase 1 Training: Programming and Optimization NERSC will host a four-day training event for Cori Phase 1 users on Programming Environment, Debugging and Optimization from Monday June 13 to Thursday June 16. The presenters will be Cray instructor Rick Slick and NERSC staff. Cray XC Series Programming and Optimization Description This course is intended for people who work in applications support or development of Cray XC Series computer systems. It familiarizes students with

    15. Software and High Performance Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

    16. Overview of the Defense Programs Research and Technology Development Program for Fiscal Year 1993

      SciTech Connect

      Not Available

      1993-09-30

      This document presents a programmatic overview and program element plan summaries for conceptual design and assessment; physics; computation and modeling; system engineering science and technology; electronics, photonics, sensors, and mechanical components; chemistry and materials; special nuclear materials, tritium, and explosives.

    17. ELECTRONIC DIGITAL COMPUTER

      DOEpatents

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.

    18. Computer Processor Allocator

      Energy Science and Technology Software Center

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
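
      A minimal sketch of the idea described above, assuming a hypothetical table layout and allocation policy (not the CPA's actual schema or code): per-processor health and allocation state is kept in a small database and consulted on every request, so only healthy, free processors are handed out.

        import sqlite3

        # Toy allocator: persistent per-processor state (health, current job) lives in a
        # database and is factored into each allocation decision. Hypothetical sketch only.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE procs (id INTEGER PRIMARY KEY, healthy INTEGER, job TEXT)")
        db.executemany("INSERT INTO procs VALUES (?, ?, NULL)",
                       [(i, 0 if i == 3 else 1) for i in range(8)])  # processor 3 marked unhealthy

        def allocate(job, count):
            """Return `count` healthy, unallocated processor ids and record the assignment."""
            rows = db.execute(
                "SELECT id FROM procs WHERE healthy = 1 AND job IS NULL LIMIT ?", (count,)
            ).fetchall()
            if len(rows) < count:
                raise RuntimeError("not enough healthy free processors")
            ids = [r[0] for r in rows]
            db.executemany("UPDATE procs SET job = ? WHERE id = ?", [(job, i) for i in ids])
            db.commit()
            return ids

        print(allocate("jobA", 4))  # e.g. [0, 1, 2, 4]: the unhealthy processor is skipped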

    19. Advanced Scientific Computing Research Network Requirements

      SciTech Connect

      Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

      2013-03-08

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

    20. Reconnection methods for an arbitrary polyhedral computational grid

      SciTech Connect

      Rasskazova, V.V.; Sofronov, I.D.; Shaporenko, A.N.; Burton, D.E.; Miller, D.S.

      1996-08-01

      The paper suggests a method for local reconstruction of a 3D irregular computational grid and an algorithm for its program implementation. Two basic grid reconstruction operations are used: pasting two cells that share a common face, and cutting a cell into two by a given plane. The paper presents and analyzes criteria for choosing one operation or the other. A program for local reconstruction of a 3D irregular grid is used to conduct two test computations, and the computed results are given.
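
      The "paste" operation can be pictured on a toy face-based grid representation. The sketch below is purely illustrative (the Grid class and its bookkeeping are hypothetical, not the paper's data structures): pasting removes the shared face and unions the two cells' face sets, while a cut would do the reverse by introducing a new face along the given plane.

        # Toy illustration of the "paste" reconnection: two cells sharing a face are
        # merged and the shared face is deleted. Hypothetical sketch, not the paper's code.
        class Grid:
            def __init__(self, faces, cells):
                self.faces = faces      # face_id -> list of vertex ids
                self.cells = cells      # cell_id -> set of face ids bounding the cell

            def shared_face(self, c1, c2):
                common = self.cells[c1] & self.cells[c2]
                return next(iter(common), None)

            def paste(self, c1, c2):
                """Merge cells c1 and c2 across their common face; c1 becomes the merged cell."""
                f = self.shared_face(c1, c2)
                if f is None:
                    raise ValueError("cells do not share a face")
                self.cells[c1] = (self.cells[c1] | self.cells[c2]) - {f}
                del self.cells[c2]
                del self.faces[f]
                return c1

        # Two box-like cells that share face 9:
        g = Grid(faces={i: [] for i in range(1, 12)},
                 cells={1: {1, 2, 3, 4, 5, 9}, 2: {6, 7, 8, 9, 10, 11}})
        g.paste(1, 2)
        print(g.cells[1])   # ten faces remain; the shared face 9 is gone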

    1. Student science enrichment training program

      SciTech Connect

      Sandhu, S.S.

      1994-08-01

      This is a report on the Student Science Enrichment Training Program, with special emphasis on the chemical and computer science fields. The residential summer session was held at the campus of Claflin College, Orangeburg, SC, for six weeks during the summer of 1993, running concomitantly with the college's summer school. The fifty participants selected for this program included high school sophomores, juniors and seniors. The students came from rural South Carolina and adjoining states which, at present, have limited science and computer science facilities. The program focused on high-ability minority students with high potential for science, engineering and mathematical careers. The major objective was to increase the pool of well-qualified, college-entering minority students who would elect to go into science, engineering and mathematical careers. The Division of Natural Sciences and Mathematics and Engineering at Claflin College received major benefits from this program, as it helped them to expand the Departments of Chemistry, Engineering, Mathematics and Computer Science as a result of additional enrollment. It also established an expanded pool of well-qualified minority science and mathematics graduates, who were recruited by federal agencies and private corporations visiting the Claflin College campus. The Department of Energy's relationship with Claflin College increased public awareness of energy-related job opportunities in the public and private sectors.

    2. Parallel programming with PCN. Revision 1

      SciTech Connect

      Foster, I.; Tuecke, S.

      1991-12-01

      PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

    3. Cheaper Adjoints by Reversing Address Computations

      DOE PAGES [OSTI]

      Hascoët, L.; Utke, J.; Naumann, U.

      2008-01-01

      The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
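
      As a purely illustrative sketch (hypothetical code, not the paper's tool), the snippet below shows why intermediate values are needed in reverse order: a tape-based reverse sweep for y = sin(x1*x2) must recover the intermediate product v, either by storing it during the forward sweep, as here, or, as the paper proposes for addresses, by inverting the computation from dependency information.

        import math

        # Tape-based reverse-mode differentiation of y = sin(x1 * x2).
        # The forward sweep records the intermediate v = x1 * x2; the reverse sweep
        # replays the tape backwards and needs v again to evaluate cos(v).
        # Illustrative sketch only.
        def forward(x1, x2, tape):
            v = x1 * x2
            tape.append(("mul", x1, x2))   # record the inputs of the product
            tape.append(("sin", v))        # record the intermediate value v
            return math.sin(v)

        def reverse(tape, ybar=1.0):
            grads, vbar = {}, None
            for record in reversed(tape):
                if record[0] == "sin":
                    _, v = record
                    vbar = ybar * math.cos(v)          # needs the stored intermediate v
                elif record[0] == "mul":
                    _, x1, x2 = record
                    grads["x1"] = vbar * x2
                    grads["x2"] = vbar * x1
            return grads

        tape = []
        y = forward(1.5, 2.0, tape)
        print(y, reverse(tape))   # dy/dx1 = cos(3)*2.0, dy/dx2 = cos(3)*1.5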

    4. Certification of computer professionals: A good idea?

      SciTech Connect

      Boggess, G.

      1994-12-31

      In the early stages of computing there was little understanding or attention paid to the ethical responsibilities of professionals. Companies routinely put secretaries and music majors through 30 hours of video training and turned them loose on data processing projects. As the nature of the computing task changed, these same practices were followed and the trainees were set loose on life-critical software development projects. The enormous risks of using programmers with limited training have been highlighted by the GAO report on the BSY-2 program.

    5. Indirection and computer security.

      SciTech Connect

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    6. Program Year 2008 State Energy Program Formula

      Energy.gov [DOE]

      U.S. Department of Energy (DOE) State Energy Program (SEP), SEP Program Guidance Fiscal Year 2008, Program Year 2008, energy efficiency and renewable energy programs in the states, DOE Office of Energy Efficiency and Renewable Energy

    7. PROGRAM ABSTRACTS

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      & DEVELOPMENT: PROGRAM ABSTRACTS Energy Efficiency and Renewable Energy Office of Transportation Technologies Office of Advanced Automotive Technologies Catalyst Layer Bipolar Plate Electrode Backing Layers INTEGRATED SYSTEMS Polymer Electrolyte Membrane Fuel Cells Fuel Cell Stack PEM STACK & STACK COMPONENTS Fuel Cell Stack System Air Management System Fuel Processor System For Transportation June 1999 ENERGY EFFICIENCY AND RENEWABLE ENERGY OFFICE OF TRANSPORTATION TECHNOLOGIES OFFICE

    8. Energy Conservation Program for Consumer Products and Certain Commercial

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product, EERE-2013-BT-DET-0034 | Department of Energy Program for Consumer Products and Certain Commercial and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product, EERE-2013-BT-DET-0034 Energy Conservation Program for Consumer Products and Certain Commercial and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product,

    9. Demonstration of the Scalability of Programming Environments By Simulating

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Multi--Scale Applications | Argonne Leadership Computing Facility Demonstration of the Scalability of Programming Environments By Simulating Multi--Scale Applications PI Name: Robert Voigt PI Email: rvoigt@krellinst.org Institution: Leidos Allocation Program: ALCC Allocation Hours at ALCF: 151 Million Year: 2016 Research Domain: Computer Science Future computing systems pose new challenges for current scientific software and applications. These systems will have extremely high node counts

    10. Sandia National Laboratories: Advanced Simulation and Computing: Contact

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ASC Contact ASC Sandia ASC Program Contacts Program Director Bruce Hendrickson bahendr@sandia.gov Program Manager David Womble dewombl@sandia.gov Integrated Codes Lead Scott Hutchinson sahutch@sandia.gov Physics & Engineering Modeling Lead Jim Redmond jmredmo@sandia.gov Verification & Validation Lead Curt Nilsen canilse@sandia.gov Computational Systems & Software Engineering Lead Ken Alvin kfalvin@sandia.gov Facilities Operations & User Support Lead Tom Klitsner

    11. Foundational Tools for Petascale Computing

      SciTech Connect

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

    12. Cloud Computing Services

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      and Application Center for Hydrogen Energy Research Programs ARPA-E Basic Energy Sciences ... Sea State Contour) Code Online Abstracts and Reports Water Power Personnel ...

    13. ASC Program Elements | National Nuclear Security Administration | (NNSA)

      National Nuclear Security Administration (NNSA)

      Computing ASC Program Elements Established in 1995, the Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (NNSA) Defense Programs' shift in emphasis from test-based confidence to simulation-based confidence. Under ASC, scientific simulation capabilities are developed to analyze and predict the performance, safety, and reliability of nuclear weapons and to certify their functionality. ASC integrates the work of three

    14. Computers as tools

      SciTech Connect

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic 'machine,' is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that, on the one hand, people can and do abuse computer systems and, on the other hand, people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    15. Energy Aware Computing

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX TOKIO: Total Knowledge of I/O Home » R & D » Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact
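
      A back-of-the-envelope illustration of that tradeoff, under common textbook approximations (not NERSC measurements): dynamic power scales roughly as frequency cubed when supply voltage is scaled with frequency, while a CPU-bound job's runtime scales roughly as the inverse of frequency.

        # Illustrative DVFS tradeoff. Assumptions (hypothetical, not measured data):
        # dynamic power ~ C * V^2 * f with V roughly proportional to f, so power ~ f^3;
        # runtime ~ 1/f for a CPU-bound job. Energy = power * time then scales ~ f^2.
        def relative_energy_and_time(f_scale):
            """f_scale = new_frequency / nominal_frequency."""
            power = f_scale ** 3          # relative dynamic power
            time = 1.0 / f_scale          # relative time to solution
            return power * time, time     # (relative energy, relative runtime)

        for f in (1.0, 0.9, 0.8, 0.7):
            e, t = relative_energy_and_time(f)
            print(f"f={f:.1f}: energy x{e:.2f}, runtime x{t:.2f}")
        # Lowering the clock saves energy per job but stretches the runtime,
        # which is the tradeoff described above.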

    16. Computational Science and Innovation

      SciTech Connect

      Dean, David Jarvis

      2011-01-01

      Simulations - utilizing computers to solve complicated science and engineering problems - are a key ingredient of modern science. The U.S. Department of Energy (DOE) is a world leader in the development of high-performance computing (HPC), the development of applied math and algorithms that utilize the full potential of HPC platforms, and the application of computing to science and engineering problems. An interesting general question is whether the DOE can strategically utilize its capability in simulations to advance innovation more broadly. In this article, I will argue that this is certainly possible.

    17. Convergence: Computing and communications

      SciTech Connect

      Catlett, C.

      1996-12-31

      This paper highlights the operations of the National Center for Supercomputing Applications (NCSA). NCSA is developing and implementing a national strategy to create, use, and transfer advanced computing and communication tools and information technologies for science, engineering, education, and business. The primary focus of the presentation is historical and expected growth in the computing capacity, personal computer performance, and Internet and WorldWide Web sites. Data are presented to show changes over the past 10 to 20 years in these areas. 5 figs., 4 tabs.

    18. Computation Directorate 2007 Annual Report

      SciTech Connect

      Henson, V E; Guse, J A

      2008-03-06

      extremely intricate, detailed computational simulation that we can test our theories, and simulating weather and climate over the entire globe requires the most massive high-performance computers that exist. Such extreme problems are found in numerous laboratory missions, including astrophysics, weapons programs, materials science, and earth science.

    19. On Undecidability Aspects of Resilient Computations and Implications to Exascale

      SciTech Connect

      Rao, Nageswara S

      2014-01-01

      Future Exascale computing systems, with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

    20. Computer surety: computer system inspection guidance. [Contains glossary

      SciTech Connect

      Not Available

      1981-07-01

      This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

    1. Computing and Computational Sciences Directorate - National Center for

      U.S. Department of Energy (DOE) - all webpages

      Computational Sciences Search Go! ORNL * Find People * Contact * Site Index * Comments Home Divisions and Centers Computational Sciences and Engineering Computer Science and Mathematics Information Technology Joint Institute for Computational Sciences National Center for Computational Sciences Supercomputing Projects Awards Employment Opportunities Student Opportunities About Us Organization In the News Contact Us Visitor Information ORNL Research Areas Neutron Sciences Biological Systems

    2. Program Description

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program Description The Los Alamos STEM Challenge gives your students a unique opportunity to envision the future years of discovery at LANL. Students will develop 21st century skills as they collaborate in teams to research LANL projects and propose innovative future projects. They apply creativity and critical thinking skills as they visualize their own ideas through posters, videos, apps or essays describing potential future projects at LANL. Students are encouraged to learn about the

    3. Programming models

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      NERSC-8 Procurement Programming models File Storage and I/O Edison PDSF Genepool Queues and Scheduling Retired Systems Storage & File Systems Application Performance Data & Analytics Job Logs & Statistics Training & Tutorials Software Policies User Surveys NERSC Users Group Help Staff Blogs Request Repository Mailing List Need Help? Out-of-hours Status and Password help Call operations: 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov

    4. Program Development

      SciTech Connect

      Atencio, Julian J.

      2014-05-01

      This presentation covers how to go about developing a human reliability program. In particular, it touches on conceptual thinking, raising awareness in an organization, the actions that go into developing a plan. It emphasizes evaluating all positions, eliminating positions from the pool due to mitigating factors, and keeping the process transparent. It lists components of the process and objectives in process development. It also touches on the role of leadership and the necessity for audit.

    5. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program...

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      HPC systems, but also for experts in the use of these systems to solve complex problems." ... laboratories will play a key role in solving manufacturing challenges and ...

    6. An Information Dependant Computer Program for Engine Exhaust...

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      More Documents & Publications Modular Low Cost High Energy Exhaust Heat Thermoelectric Generator with Closed-Loop Exhaust By-Pass System Exhaust Heat Recovery for Rural Alaskan ...

    7. Data Intensive Computing Pilot Program 2012/2013 Awards

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ... Use of the X-ray free-electron laser is necessary for this experiment since it makes it possible to outrun the radiation damage that would normally accrue to the catalytic cluster at a ...

    8. AutoPIPE Extract Program

      Energy Science and Technology Software Center

      1993-07-02

      The AutoPIPE Extract Program (APEX) provides an interface between CADAM (Computer Aided Design and Manufacturing) Release 21 drafting software and the AutoPIPE, Version 4.4, piping analysis program. APEX produces the AutoPIPE batch input file that corresponds to the piping shown in a CADAM model. The card image file contains header cards, material cards, and pipe cross section cards as well as tee, bend, valve, and flange cards. Node numbers are automatically generated. APEX processes straight pipe, branch lines, and ring geometries.

    9. Maryland Efficiency Program Options

      Office of Energy Efficiency and Renewable Energy (EERE)

      Maryland Efficiency Program Options, from the Tool Kit Framework: Small Town University Energy Program (STEP).

    10. STEP Program Benchmark Report

      Energy.gov [DOE]

      STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).

    11. Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      A Q&A with the ALCF's team lead for high-performance computing infrastructure. Read More Theta ESP hands-on workshop Early science teams prepare for Theta at hands-on workshop ...

    12. Quantum steady computation

      SciTech Connect

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    13. Cloud computing security.

      SciTech Connect

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    14. Data and Networking | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Data Tools & Models Glossary › FAQS › Overview Data Tools Time Series Models & Documentation EIA has a vast amount of data, reports, forecasts, analytical content, and documentation to assist researchers working on energy topics. For users eager to dive deeper into our content, we have assembled tools to customize searches, view specific data sets, study detailed documentation, and access time-series data. Application Programming Interface (API): The API allows computers to more

    15. Powering Research | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      The form factor for the decay of a kaon into a pion and two leptons Lattice QCD Paul Mackenzie Allocation Program: INCITE Allocation Hours: 180 Million Breakthrough Science At the ALCF, we provide researchers from industry, academia, and government agencies with access to leadership-class supercomputing capabilities and a team of expert computational scientists. This unparalleled combination of resources is enabling breakthroughs in science and engineering that would otherwise be impossible.

    16. Compiling & Linking | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      System Overview Data Storage & File Systems Compiling & Linking Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource.

    17. Computation supporting biodefense

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Conference on High-Speed Computing LANL / LLNL / SNL Salishan Lodge, Gleneden Beach, Oregon 24 April 2003 Murray Wolinsky murray@lanl.gov The Role of Computation in Biodefense 1. Biothreat 101 2. Bioinformatics 101 Examples 3. Sequence analysis: mpiBLAST Feng 4. Detection: KPATH Slezak 5. Protein structure: ROSETTA Strauss 6. Real-time epidemiology: EpiSIMS Eubank 7. Forensics: VESPA Myers, Korber 8. Needs System level analytical capabilities Enhanced phylogenetic algorithms Novel

    18. Computational Earth Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us Group Leader Carl Gable Deputy Group Leader Gilles Bussod Email Profile pages header Search our Profile pages Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    19. Computational Modeling | Bioenergy | NREL

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    20. Computational Physics and Methods

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      ADTSC » CCS » CCS-2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy... ...hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence

    1. New TRACC Cluster Computer

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    2. Applied Computer Science

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader David Daniel (Acting) Email Professional Assistant Erika Maestas 505-664-0673 Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a

    3. Theory, Simulation, and Computation

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      PADSTE » ADTSC Theory, Simulation, and Computation Supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Theory, modeling, informatics Suites of experiment data High performance computing, simulation, visualization Contacts Associate Director John Sarrao Deputy Associate Director Paul Dotson Directorate Office (505) 667-6645 Email Applying the

    4. Compute Reservation Request Form

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    5. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL. Fun fact: Most systems require air conditioning or chilled water to cool super powerful

    6. Program Evaluation: Program Logic | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Program Logic Program Evaluation: Program Logic Step four will help you develop a logical model for your program (learn more about the other steps in general program evaluations): What is a Logic Model? Benefits of Using Logic Modeling Pitfalls and How to Avoid Them Steps to Developing a Logic Model What is a Logic Model? Logic modeling is a thought process program evaluators have found to be useful for at least forty years and has become increasingly popular with program managers during the

    7. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

      SciTech Connect

      Michael Pernice

      2010-09-01

      INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

    8. (U) Computation acceleration using dynamic memory

      SciTech Connect

      Hakel, Peter

      2014-10-24

      Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
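
      A minimal sketch of that trade, using a dynamically growing cache so the storage size never has to be guessed in advance (hypothetical example, not the report's code):

        import functools, math

        # Expensive values are computed once, stored, and retrieved on reuse.
        # The cache (a dict under the hood) grows only as new arguments appear,
        # so its size need not be known ahead of time. Hypothetical example.
        @functools.lru_cache(maxsize=None)
        def expensive(n):
            # stand-in for a costly quantity that the program requests repeatedly
            return sum(math.sin(k) for k in range(n * 10_000))

        total = sum(expensive(i % 5) for i in range(100))  # only 5 distinct computations
        print(total)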

    9. Enabling Computational Technologies for Terascale Scientific Simulations

      SciTech Connect

      Ashby, S.F.

      2000-08-24

      We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making practical detailed 3D simulations of radiation transport. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
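
      For illustration only, here is a minimal preconditioned conjugate gradient loop on a small 1D Poisson system; a simple Jacobi (diagonal) preconditioner stands in for the scalable multigrid preconditioners the abstract describes, and the matrix, problem size, and tolerance are invented for the example.

        import numpy as np

        # Minimal preconditioned conjugate gradient (PCG) on a 1D Poisson problem.
        # A Jacobi (diagonal) preconditioner is used here purely for illustration.
        def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r                 # apply the diagonal preconditioner
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
        b = np.ones(n)
        x = pcg(A, b, M_inv=1.0 / np.diag(A))
        print(np.linalg.norm(A @ x - b))   # residual norm, should be below the tolerance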

    10. DHC: a diurnal heat capacity program for microcomputers

      SciTech Connect

      Balcomb, J.D.

      1985-01-01

      A computer program has been developed that can predict the temperature swing in direct gain passive solar buildings. The diurnal heat capacity (DHC) program calculates the DHC for any combination of homogeneous or layered surfaces using closed-form harmonic solutions to the heat diffusion equation. The theory is described, a BASIC program listing is provided, and an example solution printout is given.
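
      For reference, the closed-form harmonic solution behind such diurnal calculations is, for a homogeneous semi-infinite wall driven by a sinusoidal surface temperature of period P, the standard textbook result below (generic notation, not necessarily Balcomb's):

        % 1D heat diffusion, dT/dt = alpha * d^2T/dx^2, driven at the surface by
        % T(0,t) = Tbar + DeltaT * cos(omega t), with omega = 2*pi/P:
        T(x,t) = \bar{T} + \Delta T \, e^{-x/d} \cos\!\left(\omega t - \frac{x}{d}\right),
        \qquad d = \sqrt{\frac{2\alpha}{\omega}}, \qquad \alpha = \frac{k}{\rho c}.

      The penetration depth d indicates how deeply the daily temperature wave reaches into a material, which is why layered surfaces have to be handled layer by layer in a diurnal heat capacity calculation.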

    11. Managing turbine-generator outages by computer

      SciTech Connect

      Reinhart, E.R. [Reinhart and Associates, Inc., Austin, TX (United States)]

      1997-09-01

      This article describes software being developed to address the need for computerized planning and documentation programs that can help manage outages. Downsized power-utility companies and the growing demand for independent, competitive engineering and maintenance services have created a need for a computer-assisted planning and technical-direction program for turbine-generator outages. To meet this need, a software tool is now under development that can run on a desktop or laptop personal computer to assist utility personnel and technical directors in outage planning. Total Outage Planning Software (TOPS), which runs on Windows, takes advantage of the mass data storage available with compact-disc technology by archiving the complete outage documentation on CD. Previous outage records can then be indexed, searched, and viewed on a computer with the click of a mouse. Critical-path schedules, parts lists, parts order tracking, work instructions and procedures, custom data sheets, and progress reports can be generated by computer on-site during an outage.

    12. Intergovernmental Programs | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Intergovernmental Programs Intergovernmental Programs The Office of Environmental Management supports, by means of grants and cooperative agreements, a number of

    13. in High Performance Computing Computer System, Cluster, and Networking...

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    14. Programming Models

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Version Control Tools Programming Libraries Performance and Debugging Tools Grid Software and Services NERSC Software Downloads Software Page Template Policies User Surveys NERSC Users Group Help Staff Blogs Request Repository Mailing List Need Help? Out-of-hours Status and Password help Call operations: 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting and questions http://help.nersc.gov

    15. The HILDA program

      SciTech Connect

      Close, E.; Fong, C; Lee, E.

      1991-10-30

      Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for Heavy Ion Fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) A relatively detailed, but easily understood model. (2) Modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface. (3) Documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.

    16. Information Science, Computing, Applied Math

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design» supercomputing into the future Overview Los Alamos Asteroid Killer

    17. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      computers NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile... Sandia donates 242 computers to northern California schools Sandia National

    18. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    19. 5 Checks & 5 Tips for INCITE | Argonne Leadership Computing Facility

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Early Science Program INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program ALCF Data Science Program INCITE 2016 Projects ALCC 2016-2017 Projects ADSP projects Theta ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations 5 Checks & 5 Tips for INCITE How do you know if INCITE is right for your research? Your answers to these five questions may help you decide. Follow these

    20. Beryllium Program - Hanford Site

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Program About Us About Hanford Cleanup Hanford History Hanford Site Wide Programs Beryllium Program Beryllium Program Points of Contact Beryllium Facilities & Areas Beryllium Program Information Hanford CBDPP Committee Beryllium FAQs Beryllium Related Links Hanford Beryllium Awareness Group (BAG) Program Performance Assessments Beryllium Program Feedback Beryllium Health Advocates Primary Contractors/Employers Medical Testing and Surveillance Facilities General Resources Contact Us Beryllium