National Library of Energy BETA

Sample records for including computer modeling

  1. Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud...
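
    For context (added here, not part of the record itself), a two-stream model of this kind reduces the radiative transfer equation to a coupled pair of ODEs for the upward and downward fluxes. One common generic form is:

        \frac{dF^{\uparrow}}{d\tau} = \gamma_1 F^{\uparrow} - \gamma_2 F^{\downarrow} - S^{\uparrow}(\tau),
        \qquad
        -\frac{dF^{\downarrow}}{d\tau} = \gamma_1 F^{\downarrow} - \gamma_2 F^{\uparrow} - S^{\downarrow}(\tau)

    where the coupling coefficients \gamma_1, \gamma_2 depend on the layer's single-scattering albedo and asymmetry parameter, and S^{\uparrow}, S^{\downarrow} are solar/thermal source terms. Domain averaging enters through how the optical depth \tau and these coefficients are averaged over the horizontal domain, which is where the vertical cloud property correlations of the title matter.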

  2. Theory, Modeling and Computation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

  3. Magnetohydrodynamic Models of Accretion Including Radiation Transport | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Snapshot of the global structure of a radiation-dominated accretion flow around a black hole, computed using the Athena++ code. The left half of the image shows the density (in units of 0.01 g/cm^3), and the right half shows the radiation energy density (in units of the energy density of a 10^7 degree blackbody). Coordinate axes are...

  4. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
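
    The force-to-scroll-rate mapping described above is straightforward to sketch in code. The following is a hypothetical illustration only; the function name, units, and constants are assumptions, not taken from the patent:

        def scroll_rate(applied_force, force_threshold=0.5, gain=40.0, max_rate=200.0):
            """Map force applied against a haptic boundary to a scroll rate.

            applied_force is in newtons and the returned rate in pixels per
            second (assumed units). Below the threshold the user only feels
            the boundary; beyond it, the scroll rate grows with the magnitude
            of the applied force, as the abstract describes.
            """
            if applied_force <= force_threshold:
                return 0.0  # cursor rests against the boundary; no scrolling
            excess = applied_force - force_threshold
            return min(gain * excess, max_rate)  # proportional, capped rate

    Capping the rate and requiring a threshold force keeps light contact with the boundary (used only to feel its presence) from triggering unintended scrolling.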

  5. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The KIVA model has been instrumental in helping researchers and manufacturers understand...

  6. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    Office of Scientific and Technical Information (OSTI)

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian...

  7. Trends and challenges when including microstructure in materials modeling: Examples of problems studied at Sandia National Laboratories

    Office of Scientific and Technical Information (OSTI)

    Abstract not provided. Authors: Dingreville, Remi Philippe Michel. Publication Date:

  8. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells. Integrated biological and...

  9. A model for heterogeneous materials including phase transformations

    SciTech Connect (OSTI)

    Addessio, F.L.; Clements, B.E.; Williams, T.O.

    2005-04-15

    A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

  10. A coke oven model including thermal decomposition kinetics of tar

    SciTech Connect (OSTI)

    Munekane, Fuminori; Yamaguchi, Yukio; Tanioka, Seiichi

    1997-12-31

    A new one-dimensional coke oven model has been developed for simulating the amount and the characteristics of by-products such as tar and gas as well as coke. This model consists of both heat transfer and chemical kinetics including thermal decomposition of coal and tar. The chemical kinetics constants are obtained by estimation based on the results of experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with experimental ones.
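
    The structure described (heat transfer coupled to decomposition kinetics) is conventionally written as a one-dimensional conduction equation paired with first-order Arrhenius devolatilization. The following is a generic sketch of that coupling, not the paper's exact formulation:

        \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2},
        \qquad
        \frac{dV_i}{dt} = A_i \exp\!\left(-\frac{E_i}{RT}\right)\left(V_i^{*} - V_i\right)

    where V_i is the cumulative yield of volatile species i (e.g., tar or gas), V_i^{*} its ultimate yield, and the pre-exponential factors A_i and activation energies E_i are the kinetic constants of the kind the authors fitted to their coal and tar decomposition experiments.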

  11. Cupola Furnace Computer Process Model

    SciTech Connect (OSTI)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  12. System Advisor Model Includes Analysis of Hybrid CSP Option ...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    concepts related to power generation have been missing in the System Advisor Model (SAM). One such concept, until now, is a hybrid integrated solar combined-cycle (ISCC)...

  13. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

    Office of Scientific and Technical Information (OSTI)

    faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the...

  14. Bayesian approaches for combining computational model output...

    Office of Scientific and Technical Information (OSTI)

    Bayesian approaches for combining computational model output and physical observations. Authors: Higdon, David M.; Lawrence, Earl; Heitmann, Katrin; Habib, Salman...

  15. Computable General Equilibrium Models for Sustainability Impact...

    Open Energy Info (EERE)

    Publications, Software/modeling tools. User Interface: Other. Website: iatools.jrc.ec.europa.eu/docs/ecolecon2006.pdf. Computable General Equilibrium Models for Sustainability...

  16. Section 23: Models and Computer Codes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application-2014 for the Waste Isolation Pilot Plant: Models and Computer Codes (40 CFR 194.23). United States Department of Energy, Waste Isolation Pilot Plant, Carlsbad Field...

  17. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  18. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined...

  19. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined experimental and...

  20. Low Mach Number Models in Computational Astrophysics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Low Mach Number Models in Computational Astrophysics, Ann Almgren, Berkeley Lab. February 4, 2014. Downloads: Almgren-nug2014.pdf (Adobe Acrobat PDF file).

  1. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The KIVA model has been instrumental in helping researchers and manufacturers understand combustion processes, accelerate engine development and improve engine design and efficiency. September 25, 2012. KIVA simulation of an experimental engine with DOHC quasi-symmetric pent-roof combustion chamber and 4 valves.

  2. CUPOLA FURNACE COMPUTER PROCESS MODEL

    Office of Scientific and Technical Information (OSTI)

    ... p 809 (1995) 25. Clark D., Moore K., Stanek V., Katz S.: Neural network ... E. D., Clark D. E., Moore K. L.: AFS cupola model verification - initial investigations. ...

  3. HIV virus spread and evolution studied through computer modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This approach distinguishes between susceptible and infected individuals to capture the full infection history, including contact tracing data for infected individuals. November 19, 2013. Scanning electron micrograph of HIV-1 budding (in green) from cultured lymphocytes. The image has been colored to highlight important features.

  4. Computer Model Buildings Contaminated with Radioactive Material

    Energy Science and Technology Software Center (OSTI)

    1998-05-19

    The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

  5. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

    SciTech Connect (OSTI)

    Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios

    2015-06-09

    This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.

  6. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect (OSTI)

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  7. Low Mach Number Models in Computational Astrophysics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Low Mach Number Models in Computational Astrophysics. Ann Almgren, Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory. NUG 2014: NERSC@40, February 4, 2014. Collaborators: John Bell, Chris Malone, Andy Nonaka, Stan Woosley, Michael Zingale. In memoriam: Michael Welcome, 1957-2014. Introduction: We often associate astrophysics with explosive phenomena: novae, supernovae, gamma-ray bursts, X-ray bursts. Type Ia Supernovae Largest...

  8. MaRIE theory, modeling and computation roadmap executive summary...

    Office of Scientific and Technical Information (OSTI)

    Conference: MaRIE theory, modeling and computation roadmap executive summary.

  9. Computational Fluid Dynamics Modeling of Diesel Engine Combustion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Computational Fluid Dynamics Modeling of Diesel Engine Combustion and Emissions. 2005 Diesel Engine...

  10. Computational flow modeling of a simplified integrated tractor...

    Office of Scientific and Technical Information (OSTI)

    Computational flow modeling of a simplified integrated tractor-trailer geometry.

  11. Computer modeling of the global warming effect

    SciTech Connect (OSTI)

    Washington, W.M.

    1993-12-31

    The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

  12. MHK Reference Model: Relevance to Computer Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Diana Bull, Sandia National Laboratories, July 9th, 2012. SAND Number: 2012-5508P. Reference Model Partners: Oregon State University/NNMREC, University of Washington, St. Anthony Falls Laboratory-UMinn, Florida Atlantic University/SNMREC, Cardinal Engineering. WEC Design: Operational Waves Profile, Design of WEC (Performance), Structural Design of WEC, PTO Design; Survival Waves: Structural Design of WEC (Survivability), Brake Design, Anchor and Mooring...

  13. District-heating strategy model: computer programmer's manual

    SciTech Connect (OSTI)

    Kuzanek, J.F.

    1982-05-01

    The US Department of Housing and Urban Development (HUD) and the US Department of Energy (DOE) cosponsor a program aimed at increasing the number of district heating and cooling (DHC) systems. Such systems can reduce the amount and costs of fuels used to heat and cool buildings in a district. Twenty-eight communities have agreed to aid HUD in a national feasibility assessment of DHC systems. The HUD/DOE program entails technical assistance by Argonne National Laboratory and Oak Ridge National Laboratory. The assistance includes a computer program, called the district heating strategy model (DHSM), that performs preliminary calculations to analyze potential DHC systems. This report describes the general capabilities of the DHSM, provides historical background on its development, and explains the computer installation and operation of the model, including the data file structures and the options. Sample problems illustrate the structure of the various input data files and the interactive computer-output listings. The report is written primarily for computer programmers responsible for installing the model on their computer systems, entering data, running the model, and implementing local modifications to the code.

  14. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering (Presentation)

    SciTech Connect (OSTI)

    Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.

    2014-06-01

    This presentation discusses the significant enhancement of computational efficiency in nonlinear multiscale battery model for computer aided engineering in current research at NREL.

  15. Wild Fire Computer Model Helps Firefighters

    ScienceCinema (OSTI)

    Canfield, Jesse

    2014-06-02

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  16. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect (OSTI)

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, exit, connecting elbows and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computation meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  17. Review of the synergies between computational modeling and experimenta...

    Office of Scientific and Technical Information (OSTI)

    Accepted Manuscript: Review of the synergies between computational modeling and ... November 16, 2016. Title: Review of the synergies between computational ...

  18. Unitarity bounds in the Higgs model including triplet fields with custodial symmetry

    Office of Scientific and Technical Information (OSTI)

    We study bounds on Higgs-boson masses from perturbative unitarity in the Georgi-Machacek model, whose Higgs sector is composed of a scalar isospin doublet and a real and a complex isospin triplet field. This model can be compatible with...

  19. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect (OSTI)

    Lee, L.-Q.; Akcelik, V.; Ge, L.; Chen, S.; Schussman, G.; Candel, A.; Li, Z.; Xiao, L.; Kabel, A.; Uplenchwar, R.; Ng, C.; Ko, K. (SLAC)

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and continue to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented. These include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  20. Fast, narrow-band computer model for radiation calculations

    SciTech Connect (OSTI)

    Yan, Z.; Holmstedt, G.

    1997-01-01

    A fast, narrow-band computer model, FASTNB, which predicts the radiation intensity in a general nonisothermal and nonhomogeneous combustion environment, has been developed. The spectral absorption coefficients of the combustion products, including carbon dioxide, water vapor, and soot, are calculated based on the narrow-band model. FASTNB provides an accurate calculation at reasonably high speed. Compared with Grosshandler's narrow-band model, RADCAL, which has been verified quite extensively against experimental measurements, FASTNB is more than 20 times faster and gives almost exactly the same results.
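
    To illustrate the kind of band-by-band calculation a narrow-band code performs, here is a schematic sketch for a homogeneous, isothermal gas path with a simple Beer-Lambert band emissivity. The inputs are assumed; this is not FASTNB's actual algorithm:

        import numpy as np

        def path_intensity(nu, dnu, kappa, T, L):
            """Total emitted intensity from a homogeneous gas path of length L (m).

            nu    : band-center wavenumbers (1/m), one entry per narrow band
            dnu   : band width (1/m)
            kappa : band-mean absorption coefficient (1/m), e.g. from CO2/H2O/soot
            T     : gas temperature (K)
            """
            h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
            # Planck spectral intensity per unit wavenumber at each band center
            B = 2.0 * h * c**2 * nu**3 / np.expm1(h * c * nu / (kB * T))
            emissivity = 1.0 - np.exp(-kappa * L)  # band-mean emissivity
            return np.sum(B * emissivity * dnu)    # W m^-2 sr^-1, summed over bands

    A real narrow-band model additionally accounts for line overlap within each band and for nonisothermal, nonhomogeneous paths, which is where codes like FASTNB and RADCAL do their work.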

  1. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Pritchett, 2004). Exploration Activity...

  2. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Pritchett, 2004). Exploration...

  3. Modeling-Computer Simulations At Geysers Area (Goff & Decker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Geysers Area (Goff & Decker, 1983). Exploration Activity Details...

  4. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wisian & Blackwell, 2004). Exploration...

  5. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1980). Exploration Activity Details...

  6. Modeling-Computer Simulations (Lewicki & Oldenburg, 2004) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Lewicki & Oldenburg, 2004). Exploration Activity Details Location...

  7. Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell, 2004). Exploration Activity...

  8. Modeling-Computer Simulations (Combs, Et Al., 1999) | Open Energy...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Combs, Et Al., 1999). Exploration Activity Details Location Unspecified...

  9. Modeling-Computer Simulations At Yellowstone Region (Laney, 2005...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Yellowstone Region (Laney, 2005). Exploration Activity Details Location...

  10. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1979). Exploration Activity Details...

  11. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1977). Exploration Activity Details...

  12. Modeling-Computer Simulations (Ozkocak, 1985) | Open Energy Informatio...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ozkocak, 1985). Exploration Activity Details Location Unspecified...

  13. Modeling-Computer Simulations At White Mountains Area (Goff ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At White Mountains Area (Goff & Decker, 1983). Exploration Activity...

  14. Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell, 2004). Exploration Activity...

  15. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Wilt & Haar, 1986)...

  16. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Kennedy & Soest, 2006). Exploration...

  17. Modeling-Computer Simulations (Ranalli & Rybach, 2005) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ranalli & Rybach, 2005). Exploration Activity Details Location...

  18. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1983). Exploration Activity Details...

  19. Preliminary Phase Field Computational Model Development

    SciTech Connect (OSTI)

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress), and planned measurements of major hysteresis loops and first order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with single non-magnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately to simulate magnetic Barkhausen noise signals.
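
    For reference, the Landau-Lifshitz-Gilbert equation that these phase-field simulations solve numerically evolves the magnetization field \mathbf{M} under an effective field \mathbf{H}_{\mathrm{eff}}:

        \frac{\partial \mathbf{M}}{\partial t}
        = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
        + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}

    where \gamma is the gyromagnetic ratio, \alpha the Gilbert damping constant, and M_s the saturation magnetization; \mathbf{H}_{\mathrm{eff}} collects exchange, anisotropy, magnetostatic, and applied-field contributions, which is how the microstructural features described above (grain boundaries, precipitates, voids) enter the model.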

  20. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  1. Computational model of miniature pulsating heat pipes.

    SciTech Connect (OSTI)

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  2. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne National Laboratory. A private industry success story. PDF: cat_cummins_computing_success_story_dec_

  3. MaRIE theory, modeling and computation roadmap executive summary

    Office of Scientific and Technical Information (OSTI)

    The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained...

  4. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect (OSTI)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
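
    The six elements and four maturity levels lend themselves to a simple tabulation. A hypothetical scoring sketch follows; the element list comes from the abstract, while the 0-3 level scale and the minimum-score summary are illustrative assumptions, not the report's prescribed procedure:

        PCMM_ELEMENTS = [
            "representation and geometric fidelity",
            "physics and material model fidelity",
            "code verification",
            "solution verification",
            "model validation",
            "uncertainty quantification and sensitivity analysis",
        ]

        def assess(scores):
            """Summarize a PCMM assessment.

            scores maps each element to a maturity level (assumed 0..3).
            The effort is reported here as no more mature than its weakest
            element (an illustrative summary choice).
            """
            assert set(scores) == set(PCMM_ELEMENTS), "score every element once"
            assert all(s in range(4) for s in scores.values()), "levels run 0..3"
            return min(scores.values())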

  5. Review of computational thermal-hydraulic modeling

    SciTech Connect (OSTI)

    Keefer, R.H.; Keeton, L.W.

    1995-12-31

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  6. Modeling of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications

    Open Energy Info (EERE)

    Journal Article: Modeling of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications. OpenEI Reference Library...

  7. Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz, Et Al., 2008)...

  8. Martin Karplus and Computer Modeling for Chemical Systems

    Office of Scientific and Technical Information (OSTI)

    Additional information about Martin Karplus, computer modeling, and chemical systems is available in electronic documents and on the Web. Documents: Comparison of 3D...

  9. Modeling-Computer Simulations At Nevada Test And Training Range...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nevada Test And Training Range Area (Sabin, Et Al., 2004)...

  10. New partnership uses advanced computer science modeling to address...

    National Nuclear Security Administration (NNSA)

    New partnership uses advanced computer science modeling to address climate change | National Nuclear Security Administration...

  11. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wannamaker, Et Al., 2006). Exploration...

  12. Modeling-Computer Simulations At Obsidian Cliff Area (Hulen,...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Obsidian Cliff Area (Hulen, Et Al., 2003). Exploration Activity Details...

  13. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Laney, 2005). Exploration...

  14. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Roberts, Et Al., 1995)...

  15. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Pribnow, Et Al., 2003)...

  16. Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al., 2010). Exploration Activity Details...

  17. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Pritchett, 2004). Exploration...

  18. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Brown & DuTeaux, 1997). Exploration...

  19. Modeling-Computer Simulations At Coso Geothermal Area (1980)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1980). Exploration Activity Details Location Coso...

  20. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Newman, Et Al., 2006). Exploration...

  1. Scientists use world's fastest computer to model materials under...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientists use world's fastest computer to model materials under extreme conditions. Materials scientists are for the first time attempting to...

  2. Modeling-Computer Simulations At The Needles Area (Bell & Ramelli...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At The Needles Area (Bell & Ramelli, 2009). Exploration Activity Details...

  3. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Goff & Decker, 1983). Exploration...

  4. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Farrar, Et Al., 2003). Exploration...

  5. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Biasi, Et Al., 2009). Exploration...

  6. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Roberts, Et Al.,...

  7. Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett, 2004). Exploration Activity Details...

  8. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Tempel, Et Al., 2011). Exploration...

  9. Modeling-Computer Simulations At Nw Basin & Range Region (Biasi...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Biasi, Et Al., 2009). Exploration Activity...

  10. Modeling-Computer Simulations At Coso Geothermal Area (2000)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (2000). Exploration Activity Details Location Coso...

  11. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Biasi, Et Al., 2009). Exploration...

  12. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Wilt & Haar, 1986)...

  13. Computer Modeling of Chemical and Geochemical Processes in High...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling of chemical and geochemical processes in high ionic strength solutions is a unique capability within Sandia's Defense Waste Management Programs located in...

  14. Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker, Et Al., 2010). Exploration Activity...

  15. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Biasi, Et Al., 2009). Exploration...

  16. Modeling-Computer Simulations At Coso Geothermal Area (1999)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1999). Exploration Activity Details Location Coso...

  17. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale

    Office of Science (SC) Website

    U.S. DOE Office of Science (SC), Advanced Scientific Computing Research (ASCR).

  18. Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment

    Office of Scientific and Technical Information (OSTI)

    The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such...

  19. Ambient temperature modelling with soft computing techniques

    SciTech Connect (OSTI)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
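
    A minimal sketch of the hybrid initialization the abstract describes, in which a few backpropagation-trained weight vectors seed the GA's initial population. The interfaces and constants below are assumptions; the record does not give the paper's implementation details:

        import random

        def hybrid_init_population(pop_size, n_weights, train_bp, n_seeds=3):
            """Build a GA population whose first n_seeds individuals are
            BP-trained ANN weight vectors and the rest random, so the GA
            starts its search from a few good solutions.

            train_bp : callable returning one trained weight vector (a list
                       of floats); assumed to wrap a standard backpropagation
                       run on the temperature data.
            """
            population = [train_bp() for _ in range(n_seeds)]  # BP-trained seeds
            while len(population) < pop_size:
                population.append([random.uniform(-1.0, 1.0)
                                   for _ in range(n_weights)])
            return population

    The GA would then evolve this population against a fitness function such as negative validation error of the resulting temperature estimates.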

  20. Computational Fluid Dynamics Modeling of Diesel Engine Combustion and Emissions

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2005 Diesel Engine Emissions Reduction (DEER) Conference Presentations and Posters. PDF: 2005_deer_reitz.pdf. More Documents & Publications: Experiments and Modeling of Two-Stage Combustion in Low-Emissions Diesel Engines; Comparison of Conventional Diesel and Reactivity Controlled Compression...

  1. Modeling-Computer Simulations | Open Energy Information

    Open Energy Info (EERE)

    the risk of inaccurate predictions.[1] Potential Pitfalls: Uncertainties in initial reservoir conditions and other model inputs can cause inaccuracies in simulations, which...

  2. Scientists model brain structure to help computers recognize...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Do you see what I see? Scientists model brain structure to help computers recognize ... Introspectively, we know that the human brain solves this problem very well. We only have ...

  3. Computational Modeling for the American Chemical Society | GE...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Modeling for the American Chemical Society.

  4. Hierarchical calibration of computer models (Conference) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Hierarchical calibration of computer models.

  5. Towards a Computational Model of a Methane Producing Archaeum

    Office of Scientific and Technical Information (OSTI)

    Article) | SciTech Connect. Journal Article: Towards a Computational Model of a Methane Producing Archaeum. Authors: Peterson, Joseph R.; Labhsetwar, Piyush (ORCID 0000-0001-5933-3609); Ellermeier, Jeremy...

  6. Bayesian approaches for combining computational model output and physical

    Office of Scientific and Technical Information (OSTI)

    observations (Conference) | SciTech Connect. Title: Bayesian approaches for combining computational model output and physical observations. Authors: Higdon, David M. and Lawrence, Earl (Los Alamos National Laboratory); Heitmann, Katrin and Habib, Salman (ANL). Publication Date: 2011-07-25. OSTI Identifier: 1084581. Report Number(s):...

  7. Cielo Computational Environment Usage Model With Mappings to ACE

    Office of Scientific and Technical Information (OSTI)

    Requirements for the General Availability User Environment Capabilities Release Version 1.1 (Technical Report) | SciTech Connect. Title: Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1.

  9. Computer modeling reveals how surprisingly potent hepatitis C drug works

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling reveals how surprisingly potent hepatitis C drug works. A study reveals how daclatasvir targets one of the virus's proteins and causes the fastest viral decline ever seen with anti-HCV drugs, within 12 hours of treatment. February 19, 2013.

  10. Computationally Efficient Modeling of High-Efficiency Clean Combustion

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Engines | Department of Energy. 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. PDF: ace012_flowers_2012_o.pdf. More Documents & Publications: Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines; Simulation of High Efficiency Clean Combustion Engines and Detailed Chemical Kinetic Mechanisms Development.

  11. Use Computational Model to Design and Optimize Welding Conditions to

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Suppress Helium Cracking during Welding | Department of Energy. Today, welding is widely used for repair, maintenance, and upgrade of nuclear reactor components. As a critical technology to extend the service life of nuclear power plants beyond 60 years, weld technology must be...

  12. Review of the synergies between computational modeling and experimental

    Office of Scientific and Technical Information (OSTI)

    characterization of materials across length scales (Journal Article) | DOE PAGES. Accepted Manuscript: Review of the synergies between computational modeling and experimental characterization of materials across length scales. This content will become publicly available on November 16, 2016. With the increasing interplay between experimental and...

  13. Towards a Computational Model of a Methane Producing Archaeum (Journal

    Office of Scientific and Technical Information (OSTI)

    Article) | DOE PAGES. Title: Towards a Computational Model of a Methane Producing Archaeum. Authors: Peterson, Joseph R.; Labhsetwar, Piyush (ORCID 0000-0001-5933-3609); Ellermeier, Jeremy R.; Kohler, Petra R. A.; Jain, Ankur...

  14. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small-break loss-of-coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically and provides integration routines such as Gear's stiff algorithm, along with practical tools for generating eigenvalues, producing debug output, plotting, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effects of structural elasticity on system pressure are significant; however, its capabilities are limited to single-phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS to analyze their effects on system behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a user's manual, but to provide the theoretical bases and basic information about the computer model and the reactor.
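
    The Gear-type stiff integration the report credits to ACSL corresponds, in modern terms, to a BDF solver. A minimal sketch, assuming nothing about HFIRSYS itself: a made-up two-node pressure/flow transient with illustrative compliances and resistances, integrated with SciPy's BDF (Gear) method.

        import numpy as np
        from scipy.integrate import solve_ivp

        C1, C2 = 1e-3, 5e-4     # node compliances (illustrative units)
        R12, Rout = 2.0, 10.0   # flow resistances (illustrative units)

        def rhs(t, p):
            p1, p2 = p
            q12 = (p1 - p2) / R12          # inter-node flow
            qin = 1.0 if t < 5.0 else 0.0  # inflow stops at t = 5 s (break analogue)
            return [(qin - q12) / C1, (q12 - p2 / Rout) / C2]

        # BDF is the modern descendant of Gear's stiff algorithm.
        sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], method="BDF", max_step=0.1)
        print("final node pressures:", sol.y[:, -1])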

  15. New Computer Model Pinpoints Prime Materials for Carbon Capture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New Computer Model Pinpoints Prime Materials for Carbon Capture. July 17, 2012. NERSC Contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. UC Berkeley Contact: Robert Sanders, rsanders@berkeley.edu. [Image: zeolite350.jpg] One of the 50 best zeolite structures for capturing carbon dioxide. Zeolite is a porous solid made of silicon dioxide, or quartz. In the model, the red balls are oxygen, the tan balls are silicon. The blue-green area is where...

  16. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect (OSTI)

    Weinan E

    2012-03-29

    The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
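
    The Fermi operator expansion mentioned in the report can be sketched in a few lines: approximate the Fermi-Dirac function of the Hamiltonian by a truncated Chebyshev series and compare against exact diagonalization. The tight-binding chain, temperature, and expansion order below are all illustrative; the report's pole-expansion variant is more sophisticated than this polynomial version.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        n = 200
        H = -(np.eye(n, k=1) + np.eye(n, k=-1))    # toy 1D tight-binding Hamiltonian

        emin, emax = -2.0, 2.0                     # Gershgorin bounds on the spectrum
        Hs = (2.0 * H - (emax + emin) * np.eye(n)) / (emax - emin)  # map to [-1, 1]

        beta, mu = 10.0, 0.0                       # inverse temperature, chemical potential

        def fermi(x):
            return 1.0 / (1.0 + np.exp(beta * (x - mu)))

        coeffs = C.chebinterpolate(fermi, 60)      # truncated Chebyshev (FOE) series

        # Evaluate the matrix Chebyshev series with the two-term recurrence.
        T_prev, T_curr = np.eye(n), Hs.copy()
        P = coeffs[0] * T_prev + coeffs[1] * T_curr
        for c in coeffs[2:]:
            T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
            P += c * T_curr

        # Reference density matrix from exact diagonalization.
        w, V = np.linalg.eigh(Hs)
        P_exact = V @ np.diag(fermi(w)) @ V.T

        print("max |P - P_exact|:", np.abs(P - P_exact).max())
        # Off-diagonal decay of the density matrix illustrates the localization
        # property the report analyzes.
        print("decay |P[0, j]| at j = 0, 40, 80, ...:", np.abs(P_exact[0, ::40]))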

  17. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
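
    One concrete, widely used building block of the discretization-error estimation named here is Richardson extrapolation over a grid-refinement study. A minimal sketch with fabricated grid outputs (f1 on the finest grid) and refinement ratio r = 2; this is the standard technique, not the authors' full methodology.

        import math

        f1, f2, f3 = 0.97104, 0.96854, 0.95832   # made-up outputs: fine, medium, coarse
        r = 2.0                                   # grid refinement ratio

        p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
        err = (f1 - f2) / (r ** p - 1.0)                    # error estimate for f1

        print(f"observed order p = {p:.2f}")
        print(f"estimated discretization error in f1 = {err:.2e}")
        print(f"extrapolated (grid-free) value ~ {f1 + err:.5f}")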

  18. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

    SciTech Connect (OSTI)

    Raustad, Richard A.

    2013-01-01

    This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described.
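
    EnergyPlus empirical HVAC models of this kind are driven by fitted performance curves. A minimal sketch of the generic biquadratic form such models evaluate; the coefficients and operating temperatures below are placeholders, not the published VRF fits.

        # Biquadratic performance-curve evaluation of the kind EnergyPlus empirical
        # models use; all coefficient values here are hypothetical.
        def biquadratic(c, t_wb_in, t_db_out):
            """Capacity ratio vs. indoor wet-bulb and outdoor dry-bulb temps (C)."""
            return (c[0] + c[1] * t_wb_in + c[2] * t_wb_in ** 2
                         + c[3] * t_db_out + c[4] * t_db_out ** 2
                         + c[5] * t_wb_in * t_db_out)

        coeffs = [0.50, 0.02, 1e-4, 0.01, -2e-5, -1e-4]  # hypothetical fit
        cap_ratio = biquadratic(coeffs, 19.4, 35.0)       # rated-like conditions
        print(f"cooling capacity ratio: {cap_ratio:.3f}")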

  19. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
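
    The patent abstract does not give the modified expression itself, so the sketch below shows the classic Butler-Volmer relation plus a hypothetical sigmoid pulse-time factor of the general kind described; aside from the physical constants, every parameter value is made up.

        import numpy as np

        F, R = 96485.0, 8.314   # Faraday constant (C/mol), gas constant (J/mol/K)

        def bv_current(eta, i0, T=298.15, aa=0.5, ac=0.5):
            """Classic Butler-Volmer current density (A/m^2) at overpotential eta (V)."""
            return i0 * (np.exp(aa * F * eta / (R * T))
                         - np.exp(-ac * F * eta / (R * T)))

        def pulse_factor(t, tau=2.0, k=3.0):
            """Hypothetical sigmoid weighting for pulse-time dependence (t in s)."""
            return 1.0 / (1.0 + np.exp(-k * (t - tau)))

        eta, t = 0.05, 1.0                     # illustrative overpotential and pulse time
        i = bv_current(eta, i0=2.0) * pulse_factor(t)
        print(f"modeled pulse current density: {i:.3f} A/m^2")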

  20. Computationally Efficient Modeling of High-Efficiency Clean Combustion

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Engines | Department of Energy. 2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010, Washington D.C. PDF: ace012_aceves_2010_o.pdf. More Documents & Publications: Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines.

  1. LANL researchers use computer modeling to study HIV | National Nuclear

    National Nuclear Security Administration (NNSA)

    Security Administration

  2. New partnership uses advanced computer science modeling to address climate

    National Nuclear Security Administration (NNSA)

    change | National Nuclear Security Administration

  3. Scientists use world's fastest computer to model materials under extreme

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    conditions. Materials scientists are for the first time attempting to create atomic-scale models that describe how voids are created, grow, and merge. October 30, 2009.

  4. Computer Modeling of Saltstone Landfills by Intera Environmental Consultants

    SciTech Connect (OSTI)

    Albenesius, E.L.

    2001-08-09

    This report summarizes the computer modeling studies and how the results of these studies were used to estimate contaminant releases to the groundwater. These modeling studies were used to improve saltstone landfill designs and are the basis for the current reference design. With the reference landfill design, EPA Drinking Water Standards can be met for all chemicals and radionuclides contained in Savannah River Plant waste salts.

  5. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model is also based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.

  6. A New Perspective for the Calibration of Computational Predictor Models.

    SciTech Connect (OSTI)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
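
    The IPM calculation described, a minimal-spread interval containing all observations, reduces for a fixed basis to a linear program. A minimal sketch on a made-up quadratic data set; the paper's formulations are more general than this constant-spread version.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 40)                       # toy inputs
        y = 1.0 + 0.5 * x - 2.0 * x ** 2 + rng.normal(0, 0.1, x.size)

        # IPM with constant half-width s: |y_i - Phi_i theta| <= s for all i, min s.
        Phi = np.column_stack([np.ones_like(x), x, x ** 2])
        n, m = Phi.shape

        c = np.zeros(m + 1)                                  # variables [theta, s]
        c[-1] = 1.0                                          # minimize the spread s
        A_ub = np.block([[Phi, -np.ones((n, 1))],            #  Phi theta - s <= y
                         [-Phi, -np.ones((n, 1))]])          # -Phi theta - s <= -y
        b_ub = np.concatenate([y, -y])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * m + [(0, None)])
        theta, spread = res.x[:m], res.x[-1]
        print("center coefficients:", theta, "interval half-width:", spread)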

  7. The origins of computer weather prediction and climate modeling

    SciTech Connect (OSTI)

    Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  8. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development, and deployment of language, compiler, and runtime support for parallel programming models to support application development for the leadership-class computer systems at DOE national laboratories. Work over the course of this project focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts centered on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This involved working with the teams that provide the infrastructure CAF relies on, implementing new language and runtime features, producing an open-source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  9. Martin Karplus and Computer Modeling for Chemical Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Martin Karplus and Computer Modeling for Chemical Systems. Resources with Additional Information: Karplus Equation. Martin Karplus, the Theodore William Richards Professor of Chemistry Emeritus at Harvard, is one of three winners of the 2013 Nobel Prize in chemistry... The 83-year-old Vienna-born theoretical chemist, who is also affiliated with the Université de Strasbourg, Strasbourg, France, is a 1951 graduate of Harvard College and earned his...

  10. ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION G. BERMAN; ET...

    Office of Scientific and Technical Information (OSTI)

    Onset of Chaos in a Model of Quantum Computation. G. Berman et al. Subject categories: 71 Classical and Quantum Mechanics, General Physics; 99 General and Miscellaneous/Mathematics, Computing, and...

  11. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  12. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect (OSTI)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  13. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources, for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades to the software include the incorporation of finite-size sources in addition to dipolar source fields, and a low-induction-number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated. The GUI also allows plotting of the output.
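
    A one-dimensional analogue of the frequency-domain finite-difference approach this record describes can be written compactly. The grid, frequency, conductivity model, and source below are all made up, and the absorbing boundaries are replaced by a padded grid with Dirichlet ends to keep the sketch short.

        import numpy as np

        # Toy 1D frequency-domain finite-difference solve for diffusive EM:
        #   d2E/dx2 - i*omega*mu*sigma(x) E = i*omega*mu*Js(x)
        n, dx = 801, 5.0                          # grid points, spacing (m)
        omega, mu = 2 * np.pi * 100.0, 4e-7 * np.pi
        sigma = np.full(n, 0.01)                  # background conductivity (S/m)
        sigma[350:450] = 1.0                      # buried conductive block
        Js = np.zeros(n)
        Js[n // 2] = 1.0 / dx                     # dipole-like source at the center

        # Assemble the tridiagonal FD operator (Dirichlet ends on a padded grid).
        main = -2.0 / dx ** 2 - 1j * omega * mu * sigma
        A = (np.diag(main)
             + np.diag(np.ones(n - 1) / dx ** 2, 1)
             + np.diag(np.ones(n - 1) / dx ** 2, -1))
        b = 1j * omega * mu * Js

        E = np.linalg.solve(A, b)                 # complex field at each node
        print("field magnitude at the source:", abs(E[n // 2]))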

  14. Final Report for Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect (OSTI)

    Glotzer, Sharon C.

    2013-08-28

    In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.

  15. Computer-Aided Construction of Chemical Kinetic Models

    SciTech Connect (OSTI)

    Green, William H.

    2014-12-31

    The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g. rate coefficients, thermochemistry) one never has sufficient data to uniquely determine them all experimentally. Instead one must work in predictive mode, using theoretical rather than experimental values for many of the numbers in the model, and as appropriate refining the most sensitive numbers through experiments. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical; this is a challenging goal which requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model-reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.
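
    The core idea of computer-constructed kinetics, reactions held as data from which the solver's right-hand side is assembled automatically, fits in a short sketch. The three-species mechanism and rate constants are invented; real generators of the kind described also handle thermochemistry, reverse rates, and far larger mechanisms.

        import numpy as np
        from scipy.integrate import solve_ivp

        species = ["A", "B", "C"]
        reactions = [            # (reactant orders, product counts, rate constant)
            ({"A": 1}, {"B": 1}, 1.0),
            ({"B": 2}, {"C": 1}, 0.5),
        ]
        idx = {s: i for i, s in enumerate(species)}

        def rhs(t, y):
            # Assemble d[species]/dt directly from the reaction list.
            dydt = np.zeros_like(y)
            for reac, prod, kf in reactions:
                rate = kf * np.prod([y[idx[s]] ** n for s, n in reac.items()])
                for s, n in reac.items():
                    dydt[idx[s]] -= n * rate
                for s, n in prod.items():
                    dydt[idx[s]] += n * rate
            return dydt

        sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], method="BDF")
        print("final concentrations [A, B, C]:", sol.y[:, -1])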

  16. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect (OSTI)

    Koh, B. R.; Kim, S. H.; Taleyarkhan, R. P.; Podowski, M. Z.; Lahey, R. T., Jr.

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  17. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    DOE Patents [OSTI]

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
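
    One representative calculation inside such a PCR-design tool is the two-state duplex melting temperature from hybridization enthalpy and entropy. A minimal sketch; the enthalpy/entropy values are illustrative stand-ins, not actual nearest-neighbor parameters, and the patented system goes well beyond this.

        import math

        R = 1.987  # gas constant, cal/(mol K)

        def melting_temp(dH_kcal, dS_cal, ct_molar):
            """Tm (C) for a non-self-complementary duplex at total strand conc ct."""
            tm_kelvin = (dH_kcal * 1000.0) / (dS_cal + R * math.log(ct_molar / 4.0))
            return tm_kelvin - 273.15

        # Hypothetical duplex: dH = -150 kcal/mol, dS = -400 cal/(mol K), 250 nM strands.
        print(f"Tm = {melting_temp(-150.0, -400.0, 0.25e-6):.1f} C")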

  18. Compensator models for fluence field modulated computed tomography

    SciTech Connect (OSTI)

    Bartolac, Steven; Jaffray, David (Radiation Medicine Program, Princess Margaret Hospital; Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9)

    2013-12-15

    Purpose: Fluence field modulated computed tomography (FFMCT) presents a novel approach for acquiring CT images, whereby a patient model guides dynamically changing fluence patterns in an attempt to achieve task-based, user-prescribed, regional variations in image quality, while also controlling dose to the patient. This work aims to compare the relative effectiveness of FFMCT applied to different thoracic imaging tasks (routine diagnostic CT, lung cancer screening, and cardiac CT) when the modulator is subject to limiting constraints, such as might be present in realistic implementations. Methods: An image quality plan was defined for a simulated anthropomorphic chest slice, including regions of high and low image quality, for each of the thoracic imaging tasks. Modulated fluence patterns were generated using a simulated annealing optimization script, which attempts to achieve the image quality plan under a global dosimetric constraint. Optimization was repeated under different types of modulation constraints (e.g., fixed or gantry-angle-dependent patterns, continuous or comprised of discrete apertures), with the most limiting case being a fixed conventional bowtie filter. For each thoracic imaging task, an image quality map (IQM_sd) representing the regionally varying standard deviation is predicted for each modulation method and compared to the prescribed image quality plan as well as against results from uniform fluence fields. Relative integral dose measures were also compared. Results: Each IQM_sd resulting from FFMCT showed improved agreement with planned objectives compared to those from uniform fluence fields for all cases. Dynamically changing modulation patterns yielded better uniformity, improved image quality, and lower dose compared to fixed filter patterns with optimized tube current. For the latter fixed filter cases, the optimal choice of tube current modulation was found to depend heavily on the task. Average integral dose reduction compared to a uniform fluence field ranged from 10% using a bowtie filter to 40% or greater using an idealized modulator. Conclusions: The results support that FFMCT may achieve regionally varying image quality distributions in good agreement with user-prescribed values, while limiting dose. The imposition of constraints inhibits dose reduction capacity and agreement with image quality plans but still yields significant improvement over what is afforded by conventional dose minimization techniques. These results suggest that FFMCT can be implemented effectively even when the modulator has limited modulation capabilities.
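
    The simulated annealing optimization the abstract mentions can be illustrated on a toy problem: choose fluence weights so that a surrogate image-quality profile matches a prescription under a dose penalty. Everything here, the noise model, penalty weight, and cooling schedule, is hypothetical and far simpler than the paper's CT simulation.

        import numpy as np

        rng = np.random.default_rng(0)

        n = 16                        # fluence bins across the fan beam (toy)
        target = np.ones(n)           # prescribed regional image-quality plan
        target[5:9] = 2.0             # region requiring higher quality

        def cost(w):
            # Surrogate: quality ~ sqrt(fluence); dose penalty on total fluence.
            quality = np.sqrt(np.clip(w, 1e-6, None))
            return np.sum((quality - target) ** 2) + 0.05 * w.sum()

        w, T = np.ones(n), 1.0
        best, best_cost = w.copy(), cost(w)
        for step in range(5000):
            cand = np.clip(w + rng.normal(0, 0.1, n), 0.0, None)
            dc = cost(cand) - cost(w)
            if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
                w = cand
                if cost(w) < best_cost:
                    best, best_cost = w.copy(), cost(w)
            T *= 0.999                                     # geometric cooling

        print("optimized fluence profile:", np.round(best, 2))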

  19. Modeling the Fracture of Ice Sheets on Parallel Computers

    SciTech Connect (OSTI)

    Waisman, Haim; Tuminaro, Ray

    2013-10-10

    The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, developing novel numerical tools that enable modeling of that physics, and collaborating with experts in the ice community. At present, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurate modeling of this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer a significant advancement to the field and close a large gap in the understanding and modeling of the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters, and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, the fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.

  20. Cielo Computational Environment Usage Model With Mappings to...

    Office of Scientific and Technical Information (OSTI)

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale ...

  1. Computational fluid dynamic modeling of fluidized-bed polymerization reactors

    SciTech Connect (OSTI)

    Rokkam, Ram

    2012-11-02

    Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas- and liquid-phase reactors. The gas-phase processes are more advantageous and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot-spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas-phase polymerization fluidized-bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on kinetic theory of granular flows closures. The electrostatic model is based on a fixed, size-dependent charge for each particle phase (catalyst, polymer, polymer fines). The combined CFD model is first verified using simple test cases, validated with experiments, and then applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For scale-up of the fluidized-bed reactor, filtered models were developed and implemented for the pilot-scale reactor.

  2. Computational model for simulation small testing launcher, technical solution

    SciTech Connect (OSTI)

    Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian

    2014-12-10

    The purpose of this paper is to present the computational model and technical solutions for a multistage suborbital launcher for testing (SLT), used to test space equipment and carry scientific measurements. The computational model consists of numerical simulation of SLT evolution for different start conditions. The launcher model has six degrees of freedom (6DOF) and variable mass. The quantities analysed are the flight parameters and ballistic performance. The discussion focuses on the technical feasibility of realizing a small multistage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. While classical suborbital sounding rockets are unguided, use solid-fuel motors, and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. The project itself can therefore be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as the title indicates, has two major objectives: a short-term objective, obtaining a suborbital launching system able to enter service within a predictable period of time, and a long-term objective, the development and testing of unconventional subsystems to be integrated later into a satellite launcher as part of the European space program. The technical content of the project must therefore be carried out beyond the range of existing suborbital vehicle programs, toward current technological necessities in the space field, especially the European one.

  3. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  4. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  5. Why applicants should use computer simulation models to comply with the FERC's new merger policy

    SciTech Connect (OSTI)

    Frankena, M.W.; Morris, J.R.

    1997-02-01

    Computer models for electric utility use in complying with the US Federal Energy Regulatory Commission policy on mergers are described. Four types of simulation models that are widely used in the electric power industry are considered as tools for analyzing market power issues: dispatch/transportation models, dispatch/unit-commitment models, load-flow models, and load-flow/dispatch models. Basic model capabilities and limitations are described. Uses of the models for other purposes are also noted, including regulatory filings, antitrust litigation, and evaluation of pricing strategies.

  6. Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers

    SciTech Connect (OSTI)

    J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan

    2011-06-21

    Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al2O3 or Cr2O3. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of continuous Al2O3 scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed; among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best quality coating with a minimum number of shallow defects, and the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al2O3 scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential for providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas, and corrosion attack of the base metal in the spalled areas, was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to the variation of defects in the coating between samples. It is suspected that cauliflower-type defects in the coating were responsible for coating spallation in isolated areas. Thus, a defect-free, good-quality coating is the key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.

  7. Accelerated Climate Modeling for Energy | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility. An example of a Category 5 hurricane simulated by the CESM at 13 km resolution. Precipitable water (gray scale) shows the detailed dynamical structure in the flow. Strong precipitation is overlaid in red. High resolution is necessary to simulate reasonable numbers of tropical cyclones, including Category 4 and 5 storms. Alan Scott and Mark Taylor, Sandia National Laboratories. Accelerated Climate Modeling

  8. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    SciTech Connect (OSTI)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-21

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  9. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; et al

    2015-12-21

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  10. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
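
    The basic pattern under study, a cache-blocked sweep of a 5-point stencil, looks as follows. In NumPy the blocking mainly illustrates the access pattern rather than delivering the cache gains the paper measures on compiled kernels; the block size is arbitrary.

        import numpy as np

        def stencil_blocked(a, block=64):
            """One cache-blocked Jacobi sweep of the 2D 5-point stencil."""
            out = a.copy()
            n, m = a.shape
            for ii in range(1, n - 1, block):          # tile the interior
                for jj in range(1, m - 1, block):
                    ie, je = min(ii + block, n - 1), min(jj + block, m - 1)
                    out[ii:ie, jj:je] = 0.25 * (
                        a[ii - 1:ie - 1, jj:je] + a[ii + 1:ie + 1, jj:je] +
                        a[ii:ie, jj - 1:je - 1] + a[ii:ie, jj + 1:je + 1])
            return out

        grid = np.random.default_rng(0).random((1024, 1024))
        result = stencil_blocked(grid)
        print("interior mean after one sweep:", result[1:-1, 1:-1].mean())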

  11. Computational Modeling and Assessment Of Nanocoatings for Ultra Supercritical Boilers

    SciTech Connect (OSTI)

    David W. Gandy; John P. Shingledecker

    2011-04-11

    Forced outages and boiler unavailability in conventional coal-fired fossil power plants is most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burners systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce/eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project-jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096)-have focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron and nickel base) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% is required in Fe based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long term thermal exposure test results further showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. In order to enhance the durability of these coatings, it is necessary to develop a diffusion barrier interlayer coating such TiN and/or AlN. The third task 'Process Advanced MCrAl Nanocoating Systems' of the six-task project jointly sponsored by the Electric Power Research Institute, EPRI and the U.S. Department of Energy (DE-FC26-07NT43096)- has focused on processing of advanced nanocrystalline coating systems and development of diffusion barrier interlayer coatings. Among the diffusion interlayer coatings evaluated, the TiN interlayer coating was found to be the optimum one. This report describes the research conducted under the Task 3 workscope.

  12. Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model

    SciTech Connect (OSTI)

    Sharma, Chandan; Raustad, Richard

    2013-06-01

    Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy-efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses, which provide significant reductions in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, more actual field performance data for these systems is highly desirable. An effort was made in this direction by monitoring VRF system performance over an extended period of time in a US national laboratory test facility. In response to increasing demand from the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, a VRF system model based on lab-measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points. Infiltration model inputs were adjusted at the outset to tune the computer model, and subsequent field measurements were compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.
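
    Measured-versus-simulated energy comparisons of this kind are commonly summarized with CV(RMSE) and NMBE statistics (ASHRAE Guideline 14 style calibration metrics). A minimal sketch with fabricated hourly values; the paper's own acceptance criteria are not reproduced here.

        import numpy as np

        # Fabricated hourly energy use (kWh): measured vs. simulated.
        measured = np.array([12.1, 11.8, 13.0, 14.2, 13.7, 12.9])
        simulated = np.array([11.9, 12.2, 12.7, 14.6, 13.1, 13.2])

        n, mean = len(measured), measured.mean()
        cv_rmse = 100.0 * np.sqrt(np.mean((simulated - measured) ** 2)) / mean
        nmbe = 100.0 * np.sum(simulated - measured) / (n * mean)

        print(f"CV(RMSE) = {cv_rmse:.1f}%, NMBE = {nmbe:+.1f}%")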

  13. Review of the synergies between computational modeling and experimental characterization of materials across length scales

    SciTech Connect (OSTI)

    Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean-Hubert

    2015-11-16

    With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization, and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain greater insight into structure–property relationships and to study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion of the perspectives from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to "simply" support experimental work. This is illustrated by examples from several application areas in structural materials. The manuscript ends with a discussion of problems and open scientific questions that are being explored in order to advance this relatively new field of research.

  14. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect (OSTI)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
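
    Info-gap robustness, the largest uncertainty horizon at which the worst-case response still meets a requirement, can be sketched with bisection over the horizon. The 2x2 system, uncertainty model, and requirement below are invented, and random sampling stands in for the worst-case optimization the report solves efficiently via adjoints.

        import numpy as np

        A0 = np.array([[4.0, 1.0], [1.0, 3.0]])   # nominal Ax = b model (hypothetical)
        b = np.array([1.0, 2.0])
        x_nom = np.linalg.solve(A0, b)

        def worst_response(alpha, samples=2000):
            # Crude sampling stand-in for the worst-case optimization; the report
            # replaces this inner loop with an efficient adjoint calculation.
            rng = np.random.default_rng(0)
            worst = 0.0
            for _ in range(samples):
                dA = rng.uniform(-alpha, alpha, A0.shape)   # fractional entry errors
                x = np.linalg.solve(A0 * (1.0 + dA), b)
                worst = max(worst, float(np.abs(x - x_nom).max()))
            return worst

        req = 0.10                 # allowable deviation of the solution
        lo, hi = 0.0, 0.5          # bracket for the uncertainty horizon
        for _ in range(30):        # bisection: largest alpha whose worst case passes
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if worst_response(mid) <= req else (lo, mid)

        print(f"estimated info-gap robustness: alpha ~ {lo:.3f}")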

  15. CASTING DEFECT MODELING IN AN INTEGRATED COMPUTATIONAL MATERIALS ENGINEERING APPROACH

    SciTech Connect (OSTI)

    Sabau, Adrian S [ORNL]

    2015-01-01

    To accelerate the introduction of new cast alloys, the simultaneous modeling and simulation of multiphysics phenomena needs to be considered in the design and optimization of the mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. Data on calculated solidification shrinkage are presented and their effects on microporosity levels discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for the sake of completeness and to illustrate how the length scales of the microstructure features, as well as porosity defects, are taken into account in modeling the mechanical properties. Thus, data on casting defects, including microstructure features, are crucial for evaluating the final performance-related properties of the component. ACKNOWLEDGEMENTS: This work was performed under a Cooperative Research and Development Agreement (CRADA) with Nemak Inc. and Chrysler Co. for the project "High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines." The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through the Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.

  16. Modeling-Computer Simulations At U.S. West Region (Sabin, Et...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At U.S. West Region (Sabin, Et Al., 2004) Exploration Activity Details...

  17. Modeling-Computer Simulations At Cove Fort Area (Toksoz, Et Al...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Cove Fort Area (Toksoz, Et Al, 2010) Exploration Activity Details...

  18. Modeling-Computer Simulations At U.S. West Region (Williams ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At U.S. West Region (Williams & Deangelo, 2008) Exploration Activity...

  19. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  20. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy

    1. High-Performance Computer Modeling of the Cosmos-Iridium Collision

      SciTech Connect (OSTI)

      Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

      2009-08-28

      This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

    2. HYDRODYNAMIC MODELS FOR SLURRY BUBBLE COLUMN REACTORS. FINAL TECHNICAL REPORT ALSO INCLUDES THE QUARTERLY TECHNICAL REPORT FOR THE PERIOD 01/01/1997 - 03/31/1997.

      SciTech Connect (OSTI)

      DIMITRI GIDASPOW

      1997-08-15

      The objective of this study is to develop a predictive, experimentally verified computational fluid dynamic (CFD) three-phase model. It predicts the gas, liquid, and solid hold-ups (volume fractions) and flow patterns in the industrially important bubble-coalesced (churn-turbulent) regime. The input into the model can be either particulate viscosities as measured with a Brookfield viscometer or an effective restitution coefficient for particles. A combination of x-ray and gamma-ray densitometers was used to measure solid and liquid volume fractions. There is a fair agreement between the theory and the experiment. A CCD camera was used to measure instantaneous particle velocities. There is a good agreement between the computed time average velocities and the measurements. There is an excellent agreement between the viscosity of 800 micron glass beads obtained from measurement of granular temperature (random kinetic energy of particles) and the measurement using a Brookfield viscometer. A relation between particle Reynolds stresses and granular temperature was found for developed flow. Such measurements and computations gave a restitution coefficient for a methanol catalyst of about 0.9. A transient, two-dimensional hydrodynamic model for production of methanol from syn-gas in an Air Products/DOE LaPorte slurry bubble column reactor was developed. The model predicts downflow of catalyst at the walls and oscillatory particle and gas flow at the center, with a frequency of about 0.7 Hertz. The computed temperature variation in the reactor with heat exchangers was only about 5 K, indicating good thermal management. The computed slurry height, the gas holdup, and the rate of methanol production agree with LaPorte's reported data. Unlike previous models in the literature, this model computes the gas and particle holdups and the particle rheology. The only adjustable parameter in the model is the effective particle restitution coefficient.

    3. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

      SciTech Connect (OSTI)

      Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

      1998-10-01

      Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III contains some of the assessments performed by LANL and FzK. GASFLOW is under continual development, assessment, and application by LANL and FzK. This manual is considered a living document and will be updated as warranted.

    4. Improvements in fast-response flood modeling: desktop parallel computing and domain tracking

      SciTech Connect (OSTI)

      Judi, David R; Mcpherson, Timothy N; Burian, Steven J

      2009-01-01

      It is becoming increasingly important to have the ability to accurately forecast flooding, as flooding accounts for the greatest losses due to natural disasters in the world and the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated by these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g., dikes/levees, roads, walls). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments, and because two-dimensional models based on the shallow water equations have significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper shows that two-dimensional flood modeling computation time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. It further shows that when desktop parallel computing is coupled with a domain tracking algorithm, significant computation time can be eliminated by completing computations only on inundated cells. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, engineering design tool, or planning tool. Perhaps of even greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al. 2000).
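
      The domain tracking idea can be sketched in a few lines: per time step, only cells that are wet or border a wet cell are updated, so dry regions cost nothing. This is an illustrative sketch with a toy diffusive stencil, not the paper's Java shallow-water solver.

        import numpy as np

        def update_cell(depth, i, j):
            # toy diffusive update standing in for the shallow-water solver
            nbrs = [depth[i + di, j + dj] for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < depth.shape[0] and 0 <= j + dj < depth.shape[1]]
            return 0.5 * depth[i, j] + 0.5 * sum(nbrs) / len(nbrs)

        def step(depth, active):
            # advance one time step, touching only the tracked (active) cells
            new_depth = depth.copy()
            new_active = set()
            for (i, j) in active:
                new_depth[i, j] = update_cell(depth, i, j)
                if new_depth[i, j] > 0.0:
                    # a wet cell keeps itself and its four neighbours in the domain
                    for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < depth.shape[0] and 0 <= nj < depth.shape[1]:
                            new_active.add((ni, nj))
            return new_depth, new_active

        # example: a single wet cell spreading on a 5x5 grid
        depth = np.zeros((5, 5)); depth[2, 2] = 1.0
        active = {(2, 2)}
        for _ in range(3):
            depth, active = step(depth, active)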

    5. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

      SciTech Connect (OSTI)

      JACKSON VL

      2011-08-31

      The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

    6. Validation of the thermospheric vector spherical harmonic (VSH) computer model. Master's thesis

      SciTech Connect (OSTI)

      Davis, J.L.

      1991-01-01

      A semi-empirical computer model of the lower thermosphere has been developed that provides a description of the composition and dynamics of the thermosphere (Killeen et al., 1992). Input variables needed to run the VSH model include time, space, and geophysical conditions. One of the output variables the model provides, neutral density, is of particular interest to the U.S. Air Force. Neutral densities vary both as a result of changes in solar flux (e.g., the solar cycle) and as a result of changes in the magnetosphere (e.g., large changes occur in neutral density during geomagnetic storms). Satellites in Earth orbit experience aerodynamic drag due to the atmospheric density of the thermosphere. The variability in neutral density described above affects the drag a satellite experiences and, as a result, can change the orbital characteristics of the satellite. These changes make it difficult to track the satellite's position. Therefore, it is particularly important to ensure that the accuracy of the model's neutral density is optimized for all input parameters. To accomplish this, a validation program was developed to evaluate the strengths and weaknesses of the model's density output by comparing it to SETA-2 (satellite electrostatic accelerometer) total mass density measurements.
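
      A density validation of this kind often reduces to simple ratio statistics between modeled and measured values. The sketch below is a hypothetical example of such a metric (mean model/measurement ratio and RMS of the log ratio), not the thesis' actual procedure.

        import numpy as np

        def density_stats(rho_model, rho_measured):
            # log ratios treat over- and under-prediction symmetrically
            r = np.log(np.asarray(rho_model) / np.asarray(rho_measured))
            return np.exp(r.mean()), np.sqrt((r ** 2).mean())  # mean ratio, RMS log error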

    7. Systems, Methods and Computer Readable Media for Modeling Cell...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INL has developed a set of methods to define, measure, evaluate, track, and predict performance and aging trends for advanced chemistry batteries, including lithium-ion batteries. ...

    8. Modeling-Computer Simulations (Laney, 2005) | Open Energy Information

      Open Energy Info (EERE)

      in the near surface: Available technologies for monitoring CO2 in the near-surface environment include (1) the infrared gas analyzer (IRGA) for measurement of concentrations at...

    9. Mathematical modeling and computer simulation of processes in energy systems

      SciTech Connect (OSTI)

      Hanjalic, K.C.

      1990-01-01

      This book is divided into the following chapters: 1. Modeling techniques and tools (fundamental concepts of modeling); 2. Fluid flow, heat and mass transfer, chemical reactions, and combustion; 3. Processes in energy equipment and plant components (boilers, steam and gas turbines, IC engines, heat exchangers, pumps and compressors, nuclear reactors, steam generators and separators, energy transport equipment, energy converters, etc.); 4. New thermal energy conversion technologies (MHD, coal gasification and liquefaction, fluidized-bed combustion, pulse-combustors, multistage combustion, etc.); 5. Combined cycles and plants, cogeneration; 6. Dynamics of energy systems and their components; 7. Integrated approach to energy systems modeling; and 8. Application of modeling in energy expert systems.

    10. Modeling-Computer Simulations (Walker, Et Al., 2005) | Open Energy...

      Open Energy Info (EERE)

      occurrence model for geothermal systems based on fundamental geologic data. References J. D. Walker, A. E. Sabin, J. R. Unruh, J. Combs, F. C. Monastero (2005) Development Of...

    11. Modeling-Computer Simulations At Kilauea East Rift Geothermal...

      Open Energy Info (EERE)

      importance of water convection for distributing heat in the East Rift Zone. References Albert J. Rudman, David Epp (1983) Conduction Models Of The Temperature Distribution In The...

    12. Computer-Aided Construction of Combustion Chemistry Models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Constructing Accurate Combustion Chemistry Models: Butanols William H. Green & Michael Harper MIT Dept. of Chem. Eng. CEFRC Annual Meeting, Sept. 2010 The people who did this work:...

    13. Modeling and Analysis of a Lunar Space Reactor with the Computer Code

      Office of Scientific and Technical Information (OSTI)

      RELAP5-3D/ATHENA. The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE

    14. Computer support to run models of the atmosphere. Final report

      SciTech Connect (OSTI)

      Fung, I.

      1996-08-30

      This research is focused on a better quantification of the variations in CO2 exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO2 distribution. The principal tool involves using a global three-dimensional tracer transport model to advect and convect CO2 in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.

    15. Modeling-Computer Simulations (Gritto & Majer) | Open Energy...

      Open Energy Info (EERE)

      are shown in Figure 1. The parameters of the fault were modeled after Coates and Schoenberg (1995), where the orientation of the fault relative to the finite-difference grid...

    16. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

      SciTech Connect (OSTI)

      XU, J.; COSTANTINO, C.; HOFMAYER, C.

      2006-06-26

      This paper discusses computations of seismic-induced soil pressures using finite element models for deeply embedded and/or buried stiff structures, such as those appearing in the conceptual designs of structures for advanced reactors.

    17. Theoretical and computer models of detonation in solid explosives

      SciTech Connect (OSTI)

      Tarver, C.M.; Urtiew, P.A.

      1997-10-01

      Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich-von Neumann-Doring (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High-temperature, high-density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model, based on pressure-dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state, has reproduced this nanosecond time-resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two-, and three-dimensional hydrodynamic code calculations. The next-generation reactive flow model requires improved equations of state and temperature-dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.
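
      For reference, the JWL product equation of state named above has the closed form p(V, E) = A(1 - w/(R1*V))e^(-R1*V) + B(1 - w/(R2*V))e^(-R2*V) + w*E/V, with V the relative volume. The sketch below evaluates it with commonly quoted TNT constants, included purely as an illustration, not as the paper's calibration.

        import math

        def jwl_pressure(V, E, A=371.2, B=3.23, R1=4.15, R2=0.95, w=0.30):
            # pressure in GPa; V is relative volume, E is energy per unit volume (GPa)
            return (A * (1.0 - w / (R1 * V)) * math.exp(-R1 * V)
                    + B * (1.0 - w / (R2 * V)) * math.exp(-R2 * V)
                    + w * E / V)

        print(jwl_pressure(V=1.0, E=7.0))  # example evaluation, ~8.4 GPa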

    18. Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard

      2013-06-01

      Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor unit(s). These systems move refrigerant between the outdoor and indoor units, which eliminates the need for duct work in most cases. Since many applications install duct work in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor, which may lead to operational differences compared to single-speed compression systems. To characterize operational differences, the computer model performance curves used to simulate cooling operation are also evaluated. The information in this paper is intended to provide a relative difference in system energy use and compare various installation practices that can impact performance. Comparative results of VRF versus conventional HVAC systems include energy use differences due to duct location, differences in fan energy when ducts are eliminated, and differences associated with electric versus fossil-fuel heating systems.

    19. Computer model for characterizing, screening, and optimizing electrolyte systems

      SciTech Connect (OSTI)

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    20. COMPUTATIONAL THERMODYNAMIC MODELING OF HOT CORROSION OF ALLOYS HAYNES 242 AND HASTELLOY N FOR MOLTEN SALT SERVICE

      SciTech Connect (OSTI)

      Michael V. Glazoff; Piyush Sabharwall; Akira Tokuhiro

      2014-09-01

      An evaluation of thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in eutectic mixtures of KF and ZrF4 is carried out for development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using the computational thermodynamics tool ThermoCalc, leading to the development of a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was associated with chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance. This was not surprising, given that it was developed originally to withstand the harsh conditions of a molten salt environment. The results obtained in this study provided confidence in the employed methods of computational thermodynamics and could be used in future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.

    1. Emergency Response Equipment and Related Training: Airborne Radiological Computer System (Model II)

      SciTech Connect (OSTI)

      David P. Colton

      2007-02-28

      The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record the airborne gamma radiation levels and the corresponding latitude and longitude coordinates, and to provide a first overview look of the extent and severity of an accident's impact. Second, the portable system had to be light enough and durable enough that it could be mounted in an aircraft, ground vehicle, or watercraft. Third, the system must control the collection and storage of the data, as well as provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system would only be used for analyzing and plotting the data. In essence, the provided equipment is composed of an acquisition system and an analysis system. The data can be transferred from the acquisition system to the analysis system at the end of the data collection or at some other agreeable time.

    2. Computational modeling of drug-resistant bacteria. Final report

      SciTech Connect (OSTI)

      MacDougall, Preston

      2015-03-12

      Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

    3. Computational models for the berry phase in semiconductor quantum dots

      SciTech Connect (OSTI)

      Prabhakar, S.; Melnik, R. V. N.; Sebetci, A.

      2014-10-06

      By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.
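
      The adiabatic Berry phase itself has a standard discretized form, gamma = -Im ln prod_k <u_k|u_{k+1}>, that is easy to check numerically. The sketch below applies it to a generic two-level spin whose field direction is carried around a circle at polar angle theta, and compares with the analytic solid-angle result; this is a textbook calculation, not the authors' finite element model.

        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]])
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def berry_phase(theta, n=2000):
            # discrete Berry phase of the lower eigenstate of H = B(phi).sigma
            phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            states = []
            for phi in phis:
                h = (np.sin(theta) * np.cos(phi) * sx
                     + np.sin(theta) * np.sin(phi) * sy
                     + np.cos(theta) * sz)
                _, vecs = np.linalg.eigh(h)
                states.append(vecs[:, 0])  # lower-energy eigenstate
            prod = 1.0 + 0.0j
            for k in range(n):
                prod *= np.vdot(states[k], states[(k + 1) % n])
            return -np.angle(prod)

        theta = np.pi / 3
        print(berry_phase(theta), np.pi * (1 - np.cos(theta)))  # should agree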

    4. Computer modeling of a CFB (circulating fluidized bed) gasifier

      SciTech Connect (OSTI)

      Gidaspow, D.; Ding, J.

      1990-06-01

      The overall objective of this investigation is to develop experimentally verified models for circulating fluidized bed (CFB) combustors. This report presents an extension of our cold flow modeling of a CFB given in our first quarterly report of this project and published in "Numerical Methods for Multiphase Flows," edited by I. Celik, D. Hughes, C. T. Crowe, and D. Lankford, FED-Vol. 91, American Society of Mechanical Engineers, pp. 47-56 (1990). The title of the paper is "Multiphase Navier-Stokes Equation Solver" by D. Gidaspow, J. Ding, and U.K. Jayaswal. To the two-dimensional code described in the above paper we added the energy equations and the conservation of species equations to describe a synthesis-gas-from-char producer. Under the simulation conditions the injected oxygen reacted near the inlet. The solid-gas mixing was sufficiently rapid that no undesirable hot spots were produced. This simulation illustrates the code's capability to model CFB reactors. 15 refs., 20 figs.

    5. Computer model for characterizing, screening, and optimizing electrolyte systems

      Energy Science and Technology Software Center (OSTI)

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    6. DEVELOPMENT OF PLASTICITY MODEL USING NON ASSOCIATED FLOW RULE FOR HCP MATERIALS INCLUDING ZIRCONIUM FOR NUCLEAR APPLICATIONS

      SciTech Connect (OSTI)

      Michael V. Glazoff; Jeong-Whan Yoon

      2013-08-01

      In this report (prepared in collaboration with Prof. Jeong Whan Yoon, Deakin University, Melbourne, Australia) a research effort was made to develop a non-associated flow rule for zirconium. Since Zr is a hexagonal close-packed (hcp) material, it is impossible to describe its plastic response under arbitrary loading conditions with any associated flow rule (e.g., von Mises). As a result of the strong tension-compression asymmetry of the yield stress and anisotropy, zirconium displays plastic behavior that requires a more sophisticated approach. Consequently, a new general asymmetric yield function has been developed which accommodates mathematically the four directional anisotropies along 0 degrees, 45 degrees, 90 degrees, and biaxial, under tension and compression. Stress anisotropy has been completely decoupled from the r-value by using non-associated flow plasticity, where the yield function and plastic potential have been treated separately to capture stress and r-value directionalities, respectively. This theoretical development has been verified using Zr alloys at room temperature as an example, as these materials have a very strong SD (strength differential) effect. The proposed yield function models reasonably well the evolution of yield surfaces for a zirconium clock-rolled plate during in-plane and through-thickness compression. It has been found that this function can predict both tension and compression asymmetry mathematically without any numerical tolerance and shows significant improvement compared to any reported functions. Finally, at the end of the report, a program of further research is outlined aimed at constructing tensorial relationships for the temperature- and fluence-dependent creep surfaces for Zr, Zircaloy-2, and Zircaloy-4.

    7. Computational Human Performance Modeling For Alarm System Design

      SciTech Connect (OSTI)

      Jacques Hugo

      2012-07-01

      The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques, and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm-handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.

    8. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    9. PLUVIUS: a generalized one-dimensional model of reactive pollutant behavior, including dry deposition, precipitation formation, and wet removal. Second edition

      SciTech Connect (OSTI)

      Easter, R.C.; Hales, J.M.

      1984-11-01

      This report is a second-edition user's manual for the PLUVIUS reactive-storm model. The PLUVIUS code simulates the formation of storm systems of a variety of types, and characterizes the behavior of air pollutants as they flow through, react within, and are scavenged by the storms. The computer code supplied with this report is known as PLUVIUS MOD 5.0, and is a substantial improvement over the MOD 3.1 version given in the original user's manual. Example applications of MOD 5.0 are given in the report to facilitate rapid application of the code for a variety of specific uses. 22 references, 7 figures, 48 tables.

    10. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

      SciTech Connect (OSTI)

      Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

      2008-09-01

      Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.

    11. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    12. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    13. Verification of a VRF Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Nigusse, Bereket; Raustad, Richard

      2013-06-01

      This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range biquadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
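
      The curve evaluation described above is straightforward to sketch. The snippet below uses the standard EnergyPlus biquadratic and quadratic curve forms; the coefficient arrays and the simplified power calculation are placeholders, not manufacturer data or the exact EnergyPlus implementation.

        def biquadratic(x, y, c):
            # EnergyPlus biquadratic form: c1 + c2*x + c3*x^2 + c4*y + c5*y^2 + c6*x*y
            return c[0] + c[1]*x + c[2]*x*x + c[3]*y + c[4]*y*y + c[5]*x*y

        def quadratic(plr, c):
            # quadratic part-load curve: c1 + c2*plr + c3*plr^2
            return c[0] + c[1]*plr + c[2]*plr*plr

        def cooling_capacity_power(cap_rated, eir_rated, t_wb_in, t_db_out, plr,
                                   capft, eirft, eirfplr):
            # operating capacity and electric power from rated values and curve fits
            capacity = cap_rated * biquadratic(t_wb_in, t_db_out, capft)
            eir = eir_rated * biquadratic(t_wb_in, t_db_out, eirft)
            power = capacity * eir * quadratic(plr, eirfplr)
            return capacity, power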

    14. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

      SciTech Connect (OSTI)

      Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan; Parker, Jack C.; Watson, David B; Jardine, Philip M

      2010-01-01

      Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, a number of compute nodes equal to the number of adjustable parameters (when forward differences are used for the Jacobian approximation), or twice that number (when center differences are used), reduces the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
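
      The parameter-level parallelism is easy to illustrate: each column of a forward-difference Jacobian is an independent model run. The sketch below uses Python's multiprocessing in place of MPI, with a toy run_model standing in for the forward simulation; all names are illustrative.

        import numpy as np
        from multiprocessing import Pool

        def run_model(params):
            # placeholder forward model returning simulated observations
            return np.array([params[0] ** 2 + params[1], params[0] * params[1]])

        def _perturbed(args):
            params, j, h = args
            p = params.copy()
            p[j] += h
            return run_model(p)

        def jacobian(params, h=1e-6):
            base = run_model(params)
            tasks = [(params, j, h) for j in range(len(params))]
            with Pool(len(params)) as pool:  # one worker per adjustable parameter
                cols = pool.map(_perturbed, tasks)
            return np.column_stack([(c - base) / h for c in cols])

        if __name__ == "__main__":
            print(jacobian(np.array([2.0, 3.0])))  # ~[[4, 1], [3, 2]]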

    15. Briefing package for the Yucca Flat pre-emptive review, including overview, UZ model, SZ volcanics model and summary and conclusions sections

      SciTech Connect (OSTI)

      Kwicklis, Edward Michael [Los Alamos National Laboratory]; Keating, Elizabeth H [Los Alamos National Laboratory]

      2010-12-02

      Much progress has been made in the last several years in modeling radionuclide transport from tests conducted both in the unsaturated zone and saturated volcanic rocks of Yucca Flat, Nevada. The presentations to the DOE NNSA pre-emptive review panel contained herein document the progress to date, and discuss preliminary conclusions regarding the present and future extents of contamination resulting from past nuclear tests. The presentations also discuss possible strategies for addressing uncertainty in the model results.

    16. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

      SciTech Connect (OSTI)

      Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

      2009-10-12

      In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

    17. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    18. DOE Issues Funding Opportunity for Advanced Computational and Modeling Research for the Electric Power System

      Broader source: Energy.gov [DOE]

      The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

    19. Technical Review of the CENWP Computational Fluid Dynamics Model of the John Day Dam Forebay

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

      2010-12-01

      The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, the Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model, which runs in the CFD solver software STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound; however, CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.

    20. Computer modeling of electromagnetic edge containment in twin-roll casting

      SciTech Connect (OSTI)

      Chang, F.C.; Turner, L.R.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1998-07-01

      This paper presents modeling studies of magnetohydrodynamics (MHD) analysis in twin-roll casting. Argonne National Laboratory (ANL) and Inland Steel Company have worked together to develop a 3-D computer model that can predict eddy currents, fluid flows, and liquid metal containment for an electromagnetic (EM) edge containment device. This mathematical model can greatly shorten casting research on the use of EM fields for liquid metal containment and control. It can also optimize the existing casting processes and minimize expensive, time-consuming full-scale testing. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in EM edge dams designed at Inland Steel for twin-roll casting. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve Maxwell's equations, Ohm's law, the Navier-Stokes equations, and transport equations of turbulent flow in a casting process that uses EM fields. ELEKTRA is able to predict the eddy-current distribution and electromagnetic forces in complex geometry. CaPS-EM is capable of modeling fluid flows with free surfaces and dynamic rollers. The computed 3-D magnetic fields and induced eddy currents from ELEKTRA are used as input to flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.

    1. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      SciTech Connect (OSTI)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia, and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    2. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia, and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    3. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

      SciTech Connect (OSTI)

      Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

      2011-06-01

      This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBAs), and beyond design basis accidents (BDBAs). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

    4. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

      SciTech Connect (OSTI)

      Maiden, Wendy M.

      2010-05-01

      Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

    5. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    6. Superior model for fault tolerance computation in designing nano-sized circuit systems

      SciTech Connect (OSTI)

      Singh, N. S. S.; Muthuvalu, M. S.; Asirvadam, V. S.

      2014-10-24

      As CMOS technology scales into the nanometer regime, reliability becomes a decisive consideration in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first describes the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Second, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of circuits with the same functionality. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of these circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different input signal patterns over time for circuits with the same functionality. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
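
      The Probabilistic Gate Model referenced above propagates, gate by gate, the probability that each signal carries a given logic value when every gate can fail independently. A minimal sketch of the idea follows (the two-input NAND example and the error rate are illustrative assumptions, not the paper's benchmark circuits):

        # Probabilistic Gate Model (PGM) sketch: each gate flips its ideal output
        # with probability eps; signal probabilities are propagated gate by gate,
        # treating gate inputs as independent (the standard PGM approximation).
        def nand_pgm(p_a, p_b, eps):
            """Return P(output = 1) for a 2-input NAND gate with error rate eps,
            given P(a = 1) = p_a and P(b = 1) = p_b."""
            p_ideal_one = 1.0 - p_a * p_b   # ideal NAND outputs 0 only when both inputs are 1
            return (1.0 - eps) * p_ideal_one + eps * (1.0 - p_ideal_one)

        eps = 0.01
        p_c = nand_pgm(0.5, 0.5, eps)       # c = NAND(a, b)
        p_out = nand_pgm(p_c, p_c, eps)     # out = NAND(c, c), i.e. an inverter stage
        print(p_c, p_out)                   # compare against the eps = 0 case for reliability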

    7. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-07-28

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors also present Synergia's design principles and its performance on HPC platforms.

    8. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop W. Musial, M. Lawson, and S. Rooney National Renewable Energy Laboratory Technical Report NREL/TP-5000-57605 February 2013 NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. National Renewable Energy Laboratory 15013 Denver West Parkway Golden, Colorado 80401 303-275-3000 *

    9. NREL Computer Models Integrate Wind Turbines with Floating Platforms (Fact Sheet)

      SciTech Connect (OSTI)

      Not Available

      2011-07-01

      Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.

    10. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop W. Musial, M. Lawson, and S. Rooney National Renewable Energy Laboratory Technical Report NREL/TP-5000-57605 February 2013 NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. National Renewable Energy Laboratory 15013 Denver West Parkway Golden, Colorado 80401 303-275-3000 *

    11. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect (OSTI)

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics (256 KB of local memory per SPE and a new low-level communication mechanism) make porting an application very challenging. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

    12. Computer modeling of electromagnetic fields and fluid flows for edge containment in continuous casting

      SciTech Connect (OSTI)

      Chang, F.C.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1996-02-01

      A computer model was developed to predict eddy currents and fluid flows in molten steel. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in electromagnetic (EM) edge dams (EMDs) designed at Inland Steel for twin-roll casting. The model can be used to optimize the EMD design for its intended application and to minimize expensive, time-consuming full-scale testing. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA is able to predict the eddy-current distribution and the electromagnetic forces in complex geometries. CaPS-EM is capable of modeling fluid flows with free surfaces. Results of the numerical simulation compared well with measurements obtained from a static test.

    13. Computational fluid dynamics modeling of coal gasification in a pressurized spout-fluid bed

      SciTech Connect (OSTI)

      Zhongyi Deng; Rui Xiao; Baosheng Jin; He Huang; Laihong Shen; Qilei Song; Qianjun Li

      2008-05-15

      Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, has been extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas-phase reaction submodels. The rates of the heterogeneous reactions are determined by combining the Arrhenius rate with the diffusion rate, and the homogeneous gas-phase reactions are treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
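
      Combining an Arrhenius kinetic rate with a diffusion rate, as the abstract describes for the char reactions, is commonly done by treating the two as resistances in series. A minimal sketch under that assumption (the constants below are illustrative placeholders, not the paper's fitted values):

        import math

        def combined_char_rate(T, p_ox, A=200.0, E=1.1e5, k_diff=0.05):
            """Effective char surface reaction rate: the Arrhenius (kinetic) and
            gas-film diffusion rates act as series resistances."""
            R = 8.314                                # J/(mol K)
            k_kin = A * math.exp(-E / (R * T))       # chemical kinetics limit
            k_eff = 1.0 / (1.0 / k_kin + 1.0 / k_diff)
            return k_eff * p_ox                      # rate taken proportional to oxidant pressure

        # Example: 1200 K char surface, 0.2 bar oxidant partial pressure
        print(combined_char_rate(T=1200.0, p_ox=0.2e5))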

    14. An integrated computer modeling environment for regional land use, air quality, and transportation planning

      SciTech Connect (OSTI)

      Hanley, C.J.; Marshall, N.L.

      1997-04-01

      The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New Mexico, area. This environment provides predictive capability combined with a graphical and geographical interface. The graphical interface shows the causal relationships between data and policy scenarios and supports alternative model formulations. Scenarios are launched from within a Geographic Information System (GIS), and the data produced by each model component at each time step within a simulation are stored in the GIS. A menu-driven query system is utilized to review link-based results as well as regional and area-wide results. These results can also be compared across time or between alternative land use scenarios. Using this environment, policies can be developed and implemented based on comparative analysis, rather than on single-step future projections. 16 refs., 3 figs., 2 tabs.

    15. MHK technology developments include current

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      technology developments include current energy conversion (CEC) devices, for example, hydrokinetic turbines that extract power from water currents (riverine, tidal, and ocean) and wave energy conversion (WEC) devices that extract power from wave motion. Sandia's MHK research leverages decades of experience in engineering, design, and analysis of wind power technologies, and its vast research complex, including high- performance computing (HPC), advanced materials and coatings, nondestructive

    16. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
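
      The patent text above couples a performance prediction model with probabilistic characterizations of measured load. A minimal sketch of that calibration idea (the queueing-style latency model and the synthetic measurements are assumptions for illustration, not the patented method itself):

        import numpy as np

        # Synthetic measurements: background load (utilization fraction) vs. delay (ms)
        rng = np.random.default_rng(0)
        load = rng.uniform(0.1, 0.9, 200)
        delay = 5.0 + 40.0 * load / (1.0 - load) + rng.normal(0.0, 2.0, 200)

        # Calibrate a simple M/M/1-like model d = a + b * load / (1 - load)
        X = np.column_stack([np.ones_like(load), load / (1.0 - load)])
        a, b = np.linalg.lstsq(X, delay, rcond=None)[0]

        # Probabilistic representation: residual quantiles give a prediction band
        resid = delay - X @ np.array([a, b])
        lo, hi = np.percentile(resid, [5, 95])
        predict = lambda u: a + b * u / (1.0 - u)
        print(predict(0.7) + lo, predict(0.7), predict(0.7) + hi)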

    17. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

      SciTech Connect (OSTI)

      Ermer, J. J.; Mosher, J. C.; Baillet, S.; Leahy, R. M.

      2001-01-01

      Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace-based inverse methods like MUSIC [6], the total number of forward model evaluations can often approach an order of 10{sup 3} or 10{sup 4}. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form F = GQ + N (1), where the observed forward field F (M sensors x N time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3 x P dipoles x N time samples), and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models [7] (or the fast approximations described in [1], [7]) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres adds complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the scalp, outer skull, and inner skull surfaces). Since accurate in vivo determination of internal conductivities is not currently possible, the head is typically assumed to consist of a set of contiguous isotropic regions, each with constant conductivity.
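
      The linear forward model F = GQ + N above lends itself to a compact least-squares illustration: given a known gain matrix G, the dipole moments can be recovered by pseudo-inversion. A minimal numpy sketch with random stand-ins for G and Q (purely illustrative; a real G would come from the head model discussed above):

        import numpy as np

        rng = np.random.default_rng(1)
        M, P, N = 64, 2, 100                 # sensors, dipoles, time samples
        G = rng.normal(size=(M, 3 * P))      # forward gain matrix (3 moment components per dipole)
        Q = rng.normal(size=(3 * P, N))      # dipole moment time series
        F = G @ Q + 0.05 * rng.normal(size=(M, N))   # observed field with additive noise

        # Least-squares estimate of the moments, assuming dipole locations are known
        Q_hat = np.linalg.pinv(G) @ F
        print(np.linalg.norm(Q - Q_hat) / np.linalg.norm(Q))   # relative recovery error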

    18. Integrated modeling of CO2 storage and leakage scenarios including transitions between super- and sub-critical conditions, and phase change between liquid and gaseous CO2

      SciTech Connect (OSTI)

      Pruess, K.

      2011-05-15

      Storage of CO{sub 2} in saline aquifers is intended to be at supercritical pressure and temperature conditions, but CO{sub 2} leaking from a geologic storage reservoir and migrating toward the land surface (through faults, fractures, or improperly abandoned wells) would reach subcritical conditions at depths shallower than 500-750 m. At these and shallower depths, subcritical CO{sub 2} can form two-phase mixtures of liquid and gaseous CO{sub 2}, with significant latent heat effects during boiling and condensation. Additional strongly non-isothermal effects can arise from decompression of gas-like subcritical CO{sub 2}, the so-called Joule-Thomson effect. Integrated modeling of CO{sub 2} storage and leakage requires the ability to model non-isothermal flows of brine and CO{sub 2} at conditions that range from supercritical to subcritical, including three-phase flow of aqueous phase, and both liquid and gaseous CO{sub 2}. In this paper, we describe and demonstrate comprehensive simulation capabilities that can cope with all possible phase conditions in brine-CO{sub 2} systems. Our model formulation includes: (1) an accurate description of thermophysical properties of aqueous and CO{sub 2}-rich phases as functions of temperature, pressure, salinity and CO{sub 2} content, including the mutual dissolution of CO{sub 2} and H{sub 2}O; (2) transitions between super- and subcritical conditions, including phase change between liquid and gaseous CO{sub 2}; (3) one-, two-, and three-phase flow of brine-CO{sub 2} mixtures, including heat flow; (4) non-isothermal effects associated with phase change, mutual dissolution of CO{sub 2} and water, and (de-) compression effects; and (5) the effects of dissolved NaCl, and the possibility of precipitating solid halite, with associated porosity and permeability change. Applications to specific leakage scenarios demonstrate that the peculiar thermophysical properties of CO{sub 2} provide a potential for positive as well as negative feedbacks on leakage rates, with a combination of self-enhancing and self-limiting effects. Lower viscosity and density of CO{sub 2} as compared to aqueous fluids provides a potential for self-enhancing effects during leakage, while strong cooling effects from liquid CO{sub 2} boiling into gas, and from expansion of gas rising towards the land surface, act to self-limit discharges. Strong interference between fluid phases under three-phase conditions (aqueous - liquid CO{sub 2} - gaseous CO{sub 2}) also tends to reduce CO{sub 2} fluxes. Feedback on different space and time scales can induce non-monotonic behavior of CO{sub 2} flow rates.

    19. Computational Fluid Dynamics (CFD) Modeling for High Rate Pulverized Coal Injection (PCI) into the Blast Furnace

      SciTech Connect (OSTI)

      Dr. Chenn Zhou

      2008-10-15

      Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease the coke and total energy consumption along with minimization of environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by the lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty in attaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.

    20. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

      SciTech Connect (OSTI)

      Oprisan, Sorinel Adrian; Oprisan, Ana

      2005-03-31

      Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well and smoothly defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system consists of cytotoxic active (effector) cells -- ECs, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.

    1. Enabling a Highly-Scalable Global Address Space Model for Petascale Computing

      SciTech Connect (OSTI)

      Apra, Edoardo; Vetter, Jeffrey S; Yu, Weikuan

      2010-01-01

      Over the past decade, the trajectory to the petascale has been built on increased complexity and scale of the underlying parallel architectures. Meanwhile, software developers have struggled to provide tools that maintain the productivity of computational science teams using these new systems. In this regard, Global Address Space (GAS) programming models provide a straightforward and easy-to-use addressing model, which can lead to improved productivity. However, the scalability of GAS depends directly on the design and implementation of the runtime system on the target petascale distributed-memory architecture. In this paper, we describe the design, implementation, and optimization of the Aggregate Remote Memory Copy Interface (ARMCI) runtime library on the Cray XT5 2.3 PetaFLOPs computer at Oak Ridge National Laboratory. We optimized our implementation with the flow intimation technique that we have introduced in this paper. Our optimized ARMCI implementation improves the scalability of both the Global Arrays (GA) programming model and a real-world chemistry application, NWChem, from small jobs up through 180,000 cores.
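
      ARMCI itself is a C library providing one-sided put/get over distributed memory. As an illustration of the GAS style it supports (an analogy using MPI one-sided communication via mpi4py, not the ARMCI API), a rank can write directly into another rank's memory without the target participating:

        # One-sided "global address space" style access, sketched with MPI RMA.
        # Run with: mpiexec -n 2 python this_script.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank exposes a local slice of a distributed array through a window
        local = np.zeros(10, dtype='d')
        win = MPI.Win.Create(local, comm=comm)

        win.Fence()                                   # open an access epoch
        if rank == 0 and comm.Get_size() > 1:
            # Rank 0 writes into rank 1's slice; rank 1 does not participate
            win.Put(np.arange(10, dtype='d'), target_rank=1)
        win.Fence()                                   # complete pending one-sided operations

        if rank == 1:
            print(local)                              # now holds 0..9, written remotely
        win.Free()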

    2. Swelling in light water reactor internal components: Insights from computational modeling

      SciTech Connect (OSTI)

      Stoller, Roger E.; Barashev, Alexander V.; Golubov, Stanislav I.

      2015-08-01

      A modern cluster dynamics model has been used to investigate the materials and irradiation parameters that control microstructural evolution under the relatively low-temperature exposure conditions that are representative of the operating environment for in-core light water reactor components. The focus is on components fabricated from austenitic stainless steel. The model accounts for the synergistic interaction between radiation-produced vacancies and the helium that is produced by nuclear transmutation reactions. Cavity nucleation rates are shown to be relatively high in this temperature regime (275 to 325°C), but are sensitive to assumptions about the fine-scale microstructure produced under low-temperature irradiation. The cavity nucleation rates observed run counter to the expectation that void swelling would not occur under these conditions. This expectation was based on previous research on void swelling in austenitic steels in fast reactors; the misleading impression arose primarily from an absence of relevant data. The results of the computational modeling are generally consistent with recent data obtained by examining ex-service components. However, it has been shown that the sensitivity of the model's predictions of low-temperature swelling behavior to assumptions about the primary damage source term and the specification of the mean-field sink strengths is somewhat greater than that observed at higher temperatures. Further assessment of the mathematical model is underway to meet the long-term objective of this research, which is to provide a predictive model of void swelling at relevant lifetime exposures to support extended reactor operations.

    3. New Set of Computational Tools and Models Expected to Help Enable Rapid Development and Deployment of Carbon Capture Technologies

      Broader source: Energy.gov [DOE]

      An eagerly anticipated suite of 21 computational tools and models to help enable rapid development and deployment of new carbon capture technologies is now available from the Carbon Capture Simulation Initiative.

    4. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

      SciTech Connect (OSTI)

      G. R. Odette; G. E. Lucas

      2005-11-15

      This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture, consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects on Measured and Effective Fracture Toughness - Model Based MC and To Evaluations of F82H and Eurofer 97, and 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations, which generally can be accessed on the internet or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

    5. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly. Final CRADA Report.

      SciTech Connect (OSTI)

      Tentner, A.; Nuclear Engineering Division

      2009-10-13

      A direct numerical simulation capability for two-phase flows with heat transfer in complex geometries can considerably reduce the hardware development cycle, facilitate optimization, and reduce the costs of testing various industrial facilities, such as nuclear power plants, steam generators, steam condensers, liquid cooling systems, heat exchangers, distillers, and boilers. Specifically, the phenomena occurring in a two-phase coolant flow in a BWR (Boiling Water Reactor) fuel assembly include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with the fuel assembly and, ultimately, the reactor performance. Traditionally, the best tools for analyzing two-phase flow phenomena inside the BWR fuel assembly have been sub-channel codes. However, the resolution of these codes is too coarse for analyzing detailed intra-assembly flow patterns, such as flow around a spacer element. Advanced CFD (Computational Fluid Dynamics) codes provide a potential for detailed 3D simulations of coolant flow inside a fuel assembly, including flow around a spacer element, using more fundamental physical models of flow regimes and phase interactions than sub-channel codes. Such models can extend the code applicability to a wider range of situations, which is highly important for increasing efficiency and preventing accidents.

    6. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    7. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-07-16

      Numerical modeling has become a critical tool to the U.S. Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    8. Pump apparatus including deconsolidator

      DOE Patents [OSTI]

      Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

      2014-10-07

      A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

    9. Extraction of actinides by multi-dentate diamides and their evaluation with computational molecular modeling

      SciTech Connect (OSTI)

      Sasaki, Y.; Kitatsuji, Y.; Hirata, M.; Kimura, T.; Yoshizuka, K.

      2008-07-01

      Multi-dentate diamides have been synthesized and examined for actinide (An) extractions. Bi- and tridentate extractants are the focus of this work. The extraction of actinides was performed from 0.1-6 M HNO{sub 3} into organic solvents. It was evident that N,N,N',N'-tetra-alkyl-diglycolamide (DGA) derivatives, 2,2'-(methylimino)bis(N,N-dioctyl-acetamide) (MIDOA), and N,N'-dimethyl-N,N'-dioctyl-2-(3-oxa-pentadecane)-malonamide (DMDOOPDMA) have relatively high D values (D(Pu) > 70). The following notable results using DGA extractants were obtained: (1) DGAs with short alkyl chains give higher D values than those with long alkyl chains, and (2) DGAs with long alkyl chains have high solubility in n-dodecane. Computational molecular modeling was also used to elucidate the effects of the structural and electronic properties of the reagents on their different extractabilities. (authors)

    10. Introduction to Focus Issue: Rhythms and Dynamic Transitions in Neurological Disease: Modeling, Computation, and Experiment

      SciTech Connect (OSTI)

      Kaper, Tasso J.; Kramer, Mark A.; Rotstein, Horacio G.

      2013-12-15

      Rhythmic neuronal oscillations across a broad range of frequencies, as well as spatiotemporal phenomena, such as waves and bumps, have been observed in various areas of the brain and proposed as critical to brain function. While there is a long and distinguished history of studying rhythms in nerve cells and neuronal networks in healthy organisms, the association of rhythms with disease, and their analysis in that context, are more recent developments. Indeed, it is now thought that certain aspects of diseases of the nervous system, such as epilepsy, schizophrenia, Parkinson's, and sleep disorders, are associated with transitions or disruptions of neurological rhythms. This focus issue brings together articles presenting modeling, computational, analytical, and experimental perspectives on rhythms and the dynamic transitions between them that are associated with various diseases.

    11. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-08-01

      Numerical modeling has become a critical tool to the Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors on the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies, where overly simplistic historical modeling erroneously predicted decade-scale removal times for uranium by ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    13. Computational Nanophotonics: modeling optical interactions and transport in tailored nanosystem architectures

      SciTech Connect (OSTI)

      Schatz, George; Ratner, Mark

      2014-02-27

      This report describes research by George Schatz and Mark Ratner that was done over the period 10/03-5/09 at Northwestern University. This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray, and there were many joint publications as summarized later. In addition, much of this work involved collaborations with experimental groups at Northwestern, Argonne, and elsewhere. The research was primarily concerned with developing theory and computational methods that can be used to describe the interaction of light with noble metal nanoparticles (especially silver) that are capable of plasmon excitation. Classical electrodynamics provides a powerful approach for performing these studies, so much of this research project involved the development of methods for solving Maxwell's equations, including both linear and nonlinear effects, and examining a wide range of nanostructures, including particles, particle arrays, metal films, films with holes, and combinations of metal nanostructures with polymers and other dielectrics. In addition, our work broke new ground in the development of quantum mechanical methods to describe plasmonic effects based on the use of time-dependent density functional theory, and we developed new theory concerned with the coupling of plasmons to electrical transport in molecular wire structures. Applications of our technology were aimed at the development of plasmonic devices as components of optoelectronic circuits, plasmons for spectroscopy applications, and plasmons for energy-related applications.

    14. Development of Computational Tools for Metabolic Model Curation, Flux Elucidation and Strain Design

      SciTech Connect (OSTI)

      Maranas, Costas D

      2012-05-21

      An overarching goal of the Department of Energy's mission is the efficient deployment and engineering of microbial and plant systems to enable biomass conversion in pursuit of high energy density liquid biofuels. This has spurred the pace at which new organisms are sequenced and annotated. This torrent of genomic information has opened the door to understanding metabolism not just in skeletal pathways and a handful of microorganisms but in truly genome-scale reconstructions derived for hundreds of microbes and plants. Understanding and redirecting metabolism is crucial because metabolic fluxes are unique descriptors of cellular physiology that directly assess the current cellular state and quantify the effect of genetic engineering interventions. At the same time, however, trying to keep pace with the rate of genomic data generation has ushered in a number of modeling and computational challenges related to (i) the automated assembly, testing, and correction of genome-scale metabolic models, (ii) metabolic flux elucidation using labeled isotopes, and (iii) comprehensive identification of engineering interventions leading to the desired metabolism redirection.
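
      Genome-scale metabolic models of the kind described here are commonly interrogated with flux balance analysis: maximize a target flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on an invented three-reaction toy network (the stoichiometry is illustrative only, not a curated model):

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: A_ext -> A (v0), A -> B (v1), B -> biomass (v2)
        # Rows = metabolites (A, B); columns = reactions
        S = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])
        bounds = [(0.0, 10.0)] * 3          # uptake and internal fluxes capped at 10

        # Maximize v2 (biomass) == minimize -v2, subject to S v = 0
        res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x)                        # optimal flux distribution, here [10, 10, 10]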

    15. Wind Turbine Modeling for Computational Fluid Dynamics: December 2010 - December 2012

      SciTech Connect (OSTI)

      Tossas, L. A. M.; Leonardi, S.

      2013-07-01

      With the shortage of fossil fuels and increasing environmental awareness, wind energy is becoming more and more important. As the market for wind energy grows, wind turbines and wind farms are becoming larger. Current utility-scale turbines extend a significant distance into the atmospheric boundary layer. Therefore, the interaction between the atmospheric boundary layer and the turbines and their wakes needs to be better understood. The turbulent wakes of upstream turbines affect the flow field of the turbines behind them, decreasing power production and increasing mechanical loading. With a better understanding of this type of flow, wind farm developers could plan better-performing, less maintenance-intensive wind farms. Simulating this flow using computational fluid dynamics is one important way to gain a better understanding of wind farm flows. In this study, we compare the performance of actuator disc and actuator line models in producing wind turbine wakes and the wake-turbine interaction between multiple turbines. We also examine parameters that affect the performance of these models, such as grid resolution, the use of a tip-loss correction, and the way in which the turbine force is projected onto the flow field.
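
      Both wake models compared in the study represent the rotor as body forces rather than resolved blades. The actuator disc version can be illustrated with one-dimensional momentum theory, which links the thrust coefficient to the induction the disc imposes on the flow (a textbook sketch with example values, not the study's solver):

        import math

        def actuator_disc(u_inf, ct, rho=1.225, radius=50.0):
            """1-D momentum theory for an actuator disc.
            Returns axial induction a, thrust [N], and power [W]."""
            a = 0.5 * (1.0 - math.sqrt(1.0 - ct))   # valid for ct <= 1 (no heavy-loading correction)
            area = math.pi * radius ** 2
            thrust = 0.5 * rho * area * u_inf ** 2 * ct
            cp = 4.0 * a * (1.0 - a) ** 2           # power coefficient from momentum theory
            power = 0.5 * rho * area * u_inf ** 3 * cp
            return a, thrust, power

        print(actuator_disc(u_inf=8.0, ct=0.75))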

    16. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect (OSTI)

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward the development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and on the prediction and computational theory of new phenomena of optical energy transfer and transformation at the extreme nanoscale (down to a few nanometers).

    17. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

      SciTech Connect (OSTI)

      Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

      2006-10-01

      Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
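
      The belief/plausibility pair of evidence theory can be estimated with exactly the kind of sampling strategy the abstract describes: sample the model over each focal element of the input, then accumulate the masses of the elements that certainly (belief) or possibly (plausibility) satisfy the event of interest. A minimal sketch with an invented one-input model and focal elements:

        import numpy as np

        # Focal elements (intervals) and basic probability assignments for input x
        focal = [((0.0, 0.4), 0.3), ((0.2, 0.8), 0.5), ((0.6, 1.0), 0.2)]
        f = lambda x: x ** 2 + 0.1              # model under study (illustrative)
        rng = np.random.default_rng(2)

        def bel_pl(threshold, n=1000):
            """Belief/plausibility that f(x) <= threshold, estimated by sampling
            the model's range over each focal element."""
            bel = pl = 0.0
            for (lo, hi), m in focal:
                y = f(rng.uniform(lo, hi, n))   # sample-based estimate of f's image
                if y.max() <= threshold:        # whole element satisfies -> contributes to belief
                    bel += m
                if y.min() <= threshold:        # element may satisfy -> contributes to plausibility
                    pl += m
            return bel, pl

        print(bel_pl(0.3))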

    18. DEVELOPMENT OF A COMPUTATIONAL MULTIPHASE FLOW MODEL FOR FISCHER TROPSCH SYNTHESIS IN A SLURRY BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Donna Post Guillen; Tami Grimmett; Anastasia M. Gribik; Steven P. Antal

      2010-09-01

      The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems with the principal objective to safeguard U.S. energy security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR), in which the gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer-chain hydrocarbon products, which can be upgraded to gasoline, diesel, or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics, coupled with reacting flow chemistry and heat transfer, that affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamic (CMFD) model to aid in understanding the physico-chemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles, and solid catalyst particles) that includes twelve species: (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product in small bubbles, large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined based on the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1]. The model includes heat generation due to the exothermic chemical reaction, as well as heat removal from a constant temperature heat exchanger. Results of the CMFD simulations (similar to those shown in Figure 1) will be presented.
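
      The Yates and Satterfield macrokinetic model cited above expresses the syngas consumption rate in terms of the CO and H2 partial pressures with a Langmuir-Hinshelwood-type inhibition term. A minimal sketch of that rate form (the parameter values are illustrative placeholders, not the fitted constants):

        def ft_rate_yates_satterfield(p_co, p_h2, a=1.0e-8, b=2.0e-5):
            """Yates-Satterfield rate form for cobalt-catalyzed FT synthesis:
            r = a * p_co * p_h2 / (1 + b * p_co)**2, pressures in Pa."""
            return a * p_co * p_h2 / (1.0 + b * p_co) ** 2

        # Example: 2 MPa syngas at an H2:CO ratio of 2
        p_co, p_h2 = 0.67e6, 1.33e6
        print(ft_rate_yates_satterfield(p_co, p_h2))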

    19. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing, and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long-range plans to provide Leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    20. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

      SciTech Connect (OSTI)

      Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing (E-mail: jing.xiong@siat.ac.cn); Zhang, Jianwei

      2015-01-15

      Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1.06 ± 0.40 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the premolar were 37.95 ± 10.13 mm{sup 3}, 92.45 ± 2.29%, 0.29 ± 0.06 mm, 0.33 ± 0.10 mm, and 1.28 ± 0.72 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the molar were 52.38 ± 17.27 mm{sup 3}, 94.12 ± 1.38%, 0.30 ± 0.08 mm, 0.35 ± 0.17 mm, and 1.52 ± 0.75 mm, respectively. The computation time of the proposed method for segmenting CBCT images of one subject was 7.25 ± 0.73 min. Compared with two other methods, the proposed method achieves significant improvement in terms of accuracy. Conclusions: The presented tooth segmentation method can be used to segment tooth contours from CT images accurately and efficiently.

    1. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index; the research also developed an expansive list of HPC success stories.

    2. Computational Modeling of Fluid Flow through a Fracture in Permeable Rock

      SciTech Connect (OSTI)

      Crandall, Dustin; Ahmadi, Goodarz; Smith, Duane H

      2010-01-01

      Laminar, single-phase, finite-volume solutions to the Navier-Stokes equations of fluid flow through a fracture within permeable media have been obtained. The fracture geometry was acquired from computed tomography scans of a fracture in Berea sandstone, capturing the small-scale roughness of these natural fluid conduits. First, the roughness of the two-dimensional fracture profiles was analyzed and shown to be similar to Brownian fractal structures. The permeability and tortuosity of each fracture profile were determined from simulations of fluid flow through these geometries with impermeable fracture walls. A surrounding permeable medium, assumed to obey Darcy's Law with permeabilities from 0.2 to 2,000 millidarcies, was then included in the analysis. A series of simulations for flows in fractured permeable rocks was performed, and the results were used to develop a relationship between the flow rate and pressure loss for fractures in porous rocks. The resulting friction factor, which accounts for the fracture geometric properties, is similar to the cubic law; it has the potential to be of use in discrete-fracture reservoir-scale simulations of fluid flow through highly fractured geologic formations with appreciable matrix permeability. The observed fluid flow from the surrounding permeable medium to the fracture was significant when the resistances within the fracture and the medium were of the same order. An increase of more than 5% in the volumetric flow rate within the fracture profile was observed for flows within high-permeability fractured porous media.
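
      The friction-factor relationship developed here generalizes the cubic law, under which the volumetric flow through a smooth parallel-plate fracture scales with the cube of the aperture. A minimal sketch of the baseline cubic law for comparison (geometry and fluid values are arbitrary examples):

        def cubic_law_flow(aperture, width, dp_dx, mu=1.0e-3):
            """Volumetric flow rate [m^3/s] through a smooth parallel-plate fracture
            of given aperture [m] and width [m] under pressure gradient dp_dx [Pa/m].
            Q = -(w * h**3 / (12 * mu)) * dp/dx  (the 'cubic law')."""
            return -(width * aperture ** 3 / (12.0 * mu)) * dp_dx

        # 0.5 mm aperture, 0.1 m wide, 1 kPa/m pressure drop, water viscosity
        print(cubic_law_flow(5.0e-4, 0.1, -1.0e3))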

    3. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9–10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community, and to collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop comprised plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways from the workshop and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts, supply discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest what the most pressing MHK technology needs are and how the U.S. Department of Energy (DOE) and national laboratory resources can be utilized to assist the marine energy industry in the most effective manner.

    4. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9-10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community and collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop was comprised of plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts and discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest how the U.S. Department of Energy and national laboratory resources can be utilized to most effectively assist the marine energy industry.

    5. Transfer matrix computation of critical polynomials for two-dimensional Potts models

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jacobsen, Jesper Lykke; Scullard, Christian R.

      2013-02-04

      In our previous work, we showed that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial PB(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e^K − 1 of PB(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, PB(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of PB(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures vc obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain vc(4, 8^2) = 3.742 489 (4), vc(kagome) = 1.876 459 7 (2), and vc(3, 12^2) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.
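
      For orientation, the square lattice is the one familiar case where the smallest basis already gives the exact critical manifold, v^2 = q. The sketch below only evaluates that closed form and the change of variables v = e^K − 1; it is not an implementation of the transfer-matrix computation of PB(q, v).

      ```python
      import math

      # The square-lattice Potts critical manifold is v**2 = q (exact), where
      # v = exp(K) - 1 and K is the reduced ferromagnetic coupling. This is a
      # notation check only, not the paper's transfer-matrix algorithm.
      for q in (2, 3, 4):
          v_c = math.sqrt(q)           # critical point on the v-axis
          K_c = math.log(1.0 + v_c)    # since v = e**K - 1  =>  K = ln(1 + v)
          print(f"q = {q}: v_c = {v_c:.6f}, K_c = {K_c:.6f}")
      # For q = 2 this reproduces the Ising value K_c = ln(1 + sqrt(2)) ~ 0.8814.
      ```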

    6. Computer modeling of electrical and thermal performance during bipolar pulsed radiofrequency for pain relief

      SciTech Connect (OSTI)

      Pérez, Juan J.; Pérez-Cajaraville, Juan J.; Muñoz, Víctor; Berjano, Enrique

      2014-07-15

      Purpose: Pulsed RF (PRF) is a nonablative technique for treating neuropathic pain. Bipolar PRF application is currently aimed at creating a “strip lesion” to connect the electrode tips; however, the electrical and thermal performance during bipolar PRF is currently unknown. The objective of this paper was to study the temperature and electric field distributions during bipolar PRF. Methods: The authors developed computer models to study temperature and electric field distributions during bipolar PRF and to assess the possible ablative thermal effect caused by the accumulated temperature spikes, along with any possible electroporation effects caused by the electrical field. The authors also modeled the bipolar ablative mode, known as bipolar Continuous Radiofrequency (CRF), in order to compare both techniques. Results: There were important differences between CRF and PRF in terms of electrical and thermal performance. In bipolar CRF: (1) the initial temperature of the tissue affects the temperature evolution and hence the thermal lesion dimensions; and (2) at 37 °C, 6 min of bipolar CRF creates a strip thermal lesion between the electrodes when these are separated by a distance of up to 20 mm. In bipolar PRF: (1) an interelectrode distance shorter than 5 mm produces thermal damage (i.e., ablative effect) in the intervening tissue after 6 min of bipolar RF; and (2) the possible electroporation effect (electric fields higher than 150 kV/m) would be exclusively circumscribed to a very small zone of tissue around the electrode tip. Conclusions: The results suggest that (1) the clinical parameters considered to be suitable for bipolar CRF should not necessarily be considered valid for bipolar PRF, and vice versa; and (2) the ablative effect of the CRF mode is mainly due to its much greater level of delivered energy than is the case in PRF, and therefore, at the same applied energy levels, CRF and PRF are expected to result in the same outcomes in terms of thermal damage zone dimensions.
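
      The concern about accumulated temperature spikes can be made concrete with a toy one-dimensional conduction model driven by periodic heating pulses. The sketch below is a deliberately simplified stand-in for the authors' models; the diffusivity, pulse timing, and source strength are all assumed values.

      ```python
      import numpy as np

      # Toy 1D explicit finite-difference conduction model with a pulsed heat
      # source at the centre node, illustrating how short RF pulses produce
      # temperature spikes that can accumulate between pulses. All parameters
      # are illustrative, not taken from the paper's finite-element models.
      alpha = 1.4e-7                 # tissue thermal diffusivity (m^2/s)
      nx, dx = 101, 5e-5             # 101 nodes over a 5 mm domain
      dt = 0.4 * dx**2 / alpha       # below the dx^2/(2*alpha) stability limit
      T = np.full(nx, 37.0)          # start at body temperature (deg C)
      period, width, q_pulse = 0.5, 0.02, 4000.0   # pulse timing (s) and K/s

      t = 0.0
      while t < 5.0:
          lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
          T[1:-1] += dt * alpha * lap              # end nodes held at 37 deg C
          if (t % period) < width:                 # RF pulse on?
              T[nx // 2] += dt * q_pulse
          t += dt
      print(f"peak temperature after 5 s of pulsed heating: {T.max():.1f} deg C")
      ```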

    7. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

      SciTech Connect (OSTI)

      Jablonowski, Christiane

      2015-07-14

      The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.

    8. KINETIC MODELING OF A FISCHER-TROPSCH REACTION OVER A COBALT CATALYST IN A SLURRY BUBBLE COLUMN REACTOR FOR INCORPORATION INTO A COMPUTATIONAL MULTIPHASE FLUID DYNAMICS MODEL

      SciTech Connect (OSTI)

      Anastasia Gribik; Donna Guillen, PhD; Daniel Ginosar, PhD

      2008-09-01

      Currently, multi-tubular fixed bed reactors, fluidized bed reactors, and slurry bubble column reactors (SBCRs) are used in commercial Fischer-Tropsch (FT) synthesis. There are a number of advantages of the SBCR compared to fixed and fluidized bed reactors. The main advantage of the SBCR is that temperature control and heat recovery are more easily achieved. The SBCR is a multiphase chemical reactor where a synthesis gas, comprised mainly of H2 and CO, is bubbled through a liquid hydrocarbon wax containing solid catalyst particles to produce specialty chemicals, lubricants, or fuels. The FT synthesis reaction is the polymerization of methylene groups [-(CH2)-] forming mainly linear alkanes and alkenes, ranging from methane to high molecular weight waxes. The Idaho National Laboratory is developing a computational multiphase fluid dynamics (CMFD) model of the FT process in a SBCR. This paper discusses the incorporation of absorption and reaction kinetics into the current hydrodynamic model. A phased approach for incorporation of the reaction kinetics into a CMFD model is presented here. Initially, a simple kinetic model is coupled to the hydrodynamic model, with increasing levels of complexity added in stages. The first phase of the model includes incorporation of the absorption of gas species from both large and small bubbles into the bulk liquid phase. The driving force for the gas across the gas-liquid interface into the bulk liquid is dependent upon the interfacial gas concentration in both small and large bubbles. However, because it is difficult to measure the concentration at the gas-liquid interface, coefficients for convective mass transfer have been developed for the overall driving force between the bulk concentrations in the gas and liquid phases. It is assumed that there are no temperature effects from mass transfer of the gas phases to the bulk liquid phase, since there are only small amounts of dissolved gas in the liquid phase. The product from the incorporation of absorption is the steady state concentration profile of the absorbed gas species in the bulk liquid phase. The second phase of the model incorporates a simplified macrokinetic model into the mass balance equation in the CMFD code. Initially, the model assumes that the catalyst particles are sufficiently small such that external and internal mass and heat transfer are not rate limiting. The model is developed utilizing the macrokinetic rate expression developed by Yates and Satterfield (1991). Initially, the model assumes that the only species formed other than water in the FT reaction is C27H56. Changes in the moles of the reacting species and the resulting temperatures of the catalyst and fluid phases are solved simultaneously. The macrokinetic model is solved in conjunction with the species transport equations in a separate module which is incorporated into the CMFD code.
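
      The Yates and Satterfield (1991) expression referenced above is a Langmuir-Hinshelwood form, -r_CO = a*P_CO*P_H2/(1 + b*P_CO)^2, and the sketch below simply evaluates it. The constants a and b are placeholders, since the fitted, temperature-dependent values are not reproduced here.

      ```python
      # Yates-Satterfield form of the cobalt FT macrokinetic rate:
      #   -r_CO = a * P_CO * P_H2 / (1 + b * P_CO)**2
      # The constants a and b below are illustrative placeholders; the fitted,
      # temperature-dependent values are given in Yates and Satterfield (1991).

      def ft_co_consumption_rate(p_co_mpa, p_h2_mpa, a=0.05, b=2.0):
          """CO consumption rate (arbitrary units) for given partial pressures."""
          return a * p_co_mpa * p_h2_mpa / (1.0 + b * p_co_mpa) ** 2

      # Syngas at H2/CO = 2 and 3 MPa total (ideal, unconverted feed):
      p_total = 3.0
      p_co, p_h2 = p_total / 3.0, 2.0 * p_total / 3.0
      print(f"-r_CO = {ft_co_consumption_rate(p_co, p_h2):.4f} (arb. units)")
      ```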

    9. Computing and Computational Sciences Directorate - Computer Science and Mathematics Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    10. International Nuclear Energy Research Initiative Development of Computational Models for Pyrochemical Electrorefiners of Nuclear Waste Transmutation Systems

      SciTech Connect (OSTI)

      M.F. Simpson; K.-R. Kim

      2010-12-01

      In support of closing the nuclear fuel cycle using non-aqueous separations technology, this project aims to develop computational models of electrorefiners based on fundamental chemical and physical processes. Spent driver fuel from Experimental Breeder Reactor-II (EBR-II) is currently being electrorefined in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory (INL). And Korea Atomic Energy Research Institute (KAERI) is developing electrorefining technology for future application to spent fuel treatment and management in the Republic of Korea (ROK). Electrorefining is a critical component of pyroprocessing, a non-aqueous chemical process which separates spent fuel into four streams: (1) uranium metal, (2) U/TRU metal, (3) metallic high-level waste containing cladding hulls and noble metal fission products, and (4) ceramic high-level waste containing sodium and active metal fission products. Having rigorous yet flexible electrorefiner models will facilitate process optimization and assist in trouble-shooting as necessary. To attain such models, INL/UI has focused on approaches to develop a computationally-light and portable two-dimensional (2D) model, while KAERI/SNU has investigated approaches to develop a computationally intensive three-dimensional (3D) model for detailed and fine-tuned simulation.

    11. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Systems & Software Environment program builds integrated,...

    12. BLENDING STUDY FOR SRR SALT DISPOSITION INTEGRATION: TANK 50H SCALE-MODELING AND COMPUTER-MODELING FOR BLENDING PUMP DESIGN, PHASE 2

      SciTech Connect (OSTI)

      Leishear, R.; Poirier, M.; Fowley, M.

      2011-05-26

      The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks where 300,000-800,000 gallons of salt solution will be blended in 1.3 million gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective as well as from a process safety perspective where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94 inch diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, was used in this research. The test tank represents a 1/10.85 scaled version of an 85 foot diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled per the SRR Project Engineering Manager. SRNL blending tests investigated various fixed position, non-rotating, dual nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). Primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks. Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where the sludge level was shown to remain constant. To experimentally model the sludge layer, a very thin, pourable sludge simulant was conservatively used for all testing. To experimentally model the liquid supernate layer above the sludge in waste tanks, two salt solution simulants were used, which provided a bounding range of supernate properties. One solution was water (H2O + NaOH), and the other was an inhibited, more viscous salt solution. The research performed and data obtained significantly advance the understanding of fluid mechanics, mixing theory and CFD modeling for nuclear waste tanks by benchmarking CFD results against actual experimental data. This research significantly bridges the gap between previous CFD models and actual field experiences in real waste tanks. A finding of the 2009 DOE Slurry Retrieval, Pipeline Transport and Plugging, and Mixing Workshop was that CFD models were inadequate to assess blending processes in nuclear waste tanks. One recommendation from that Workshop was that a validation, or benchmarking, program be performed for CFD modeling versus experiment. This research provided experimental data to validate and correct CFD models as they apply to mixing and blending in nuclear waste tanks. Extensive SDI research was a significant step toward benchmarking and applying CFD modeling. This research showed that CFD models not only agreed with experiment, but demonstrated that the large variance in actual experimental data accounts for misunderstood discrepancies between CFD models and experiments. Having documented this finding, SRNL was able to provide correction factors to be used with CFD models to statistically bound full scale CFD results. Through the use of pilot scale tests performed for both types of pumps and available engineering literature, SRNL demonstrated how to effectively apply CFD results to salt batch mixing in full scale waste tanks. In other words, CFD models were in error prior to the development of experimental correction factors determined during this research, which provided a technique to use CFD models for salt batch mixing and transfer pump operations. This major scientific advance in mixing technology resulted in multi-million dollar cost savings to SRR. New techniques were developed for both experiment and analysis to complete this research. Supporting this success, research findings are summarized in the Conclusions section of this report, and technical recommendations for design and operation are included in that section of the report.

    13. Overview of Computer-Aided Engineering of Batteries and Introduction to Multi-Scale, Multi-Dimensional Modeling of Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

      2012-05-01

      This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

    14. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    15. Inline CBET Model Including SRS Backscatter

      SciTech Connect (OSTI)

      Bailey, David S.

      2015-06-26

      Cross-beam energy transfer (CBET) has been used as a tool on the National Ignition Facility (NIF) since the first energetics experiments in 2009 to control the energy deposition in ignition hohlraums and tune the implosion symmetry. As large amounts of power are transferred between laser beams at the entrance holes of NIF hohlraums, the presence of many overlapping beat waves can lead to stochastic ion heating in the regions where laser beams overlap [P. Michel et al., Phys. Rev. Lett. 109, 195004 (2012)]. Using the CBET gains derived in this paper, we show how to implement these equations in a ray-based laser source for a rad-hydro code.

    16. Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse P.I.: Ahmad Pesaran Team: Tomasz Wierzbicki and Elham Sahraei (MIT) Genong Li and Lewis Collins (ANSYS) M. Sprague, G.H. Kim and S. Santhangopalan (NREL) June 17, 2014 This presentation does not contain any proprietary, confidential, or otherwise restricted information. Project ID: ES199 NREL/PR-5400-61885 Overview * Project Start: October 2013 * Project

    17. Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T.; Schlicher, Bob G

      2015-01-01

      There are many economic factors to weigh from the defender-practitioner stakeholder point of view, involving cost combined with development/deployment models. Some examples include the cost of countermeasures themselves, the cost of training and the cost of maintenance. Meanwhile, we must better anticipate the total cost from a compromise. The return on investment in countermeasures is essentially the avoided impact costs (i.e., the costs from violating availability, integrity and confidentiality/privacy requirements). The natural question arises about choosing the main risks that must be mitigated/controlled and monitored in deciding where to focus security investments. To answer this question, we have investigated the cost/benefits to the attacker/defender to better estimate risk exposure. In doing so, it is important to develop a sound basis for estimating the factors that drive risk exposure, such as the likelihood that a threat will emerge and whether it will be thwarted. This impact assessment framework can provide key information for ranking cybersecurity threats and managing risk.
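
      A minimal way to operationalize this kind of ranking, under the simplifying assumption of annualized figures, is expected-loss scoring: exposure = likelihood x impact. The sketch below is a generic illustration of that arithmetic, not the cybernomic model of the paper; all threat names and numbers are invented.

      ```python
      # Minimal expected-loss ranking of threats: exposure = likelihood * impact.
      # Threat names and all dollar/probability figures are hypothetical.
      threats = {
          "phishing of HMI operators":  {"p_per_year": 0.30, "impact_usd": 2.0e6},
          "unpatched PLC firmware":     {"p_per_year": 0.10, "impact_usd": 8.0e6},
          "insider misconfiguration":   {"p_per_year": 0.05, "impact_usd": 1.5e7},
      }

      ranked = sorted(threats.items(),
                      key=lambda kv: kv[1]["p_per_year"] * kv[1]["impact_usd"],
                      reverse=True)
      for name, t in ranked:
          exposure = t["p_per_year"] * t["impact_usd"]
          print(f"{name:28s} annualized exposure ~ ${exposure:,.0f}")
      ```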

    18. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      McClanahan, Richard; De Leon, Phillip L.

      2014-08-20

      The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
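
      The source of the speed-up is that a query frame is scored against a few representative Gaussians first, and only the components under the best representatives are evaluated exactly. The sketch below shows a two-level version of that shortlisting idea with unit-variance Gaussians; it is schematic and does not reproduce the paper's Runnalls-reduction tree.

      ```python
      import numpy as np

      # Two-level "hash" over a GMM: score G group representatives first, then
      # evaluate only the components in the top-scoring groups. Schematic only;
      # the paper builds a deeper tree via Runnalls' mixture reduction.
      rng = np.random.default_rng(0)
      D, M, G = 20, 512, 32                 # dims, components, groups
      means = rng.normal(size=(M, D))
      group_of = np.arange(M) % G           # fixed partition of components
      reps = np.stack([means[group_of == g].mean(axis=0) for g in range(G)])

      def top_components(frame, n_groups=4):
          """Indices of components in the n_groups best-matching groups."""
          # Unit-variance Gaussians => log-likelihood ~ -0.5 * squared distance.
          rep_scores = -0.5 * ((reps - frame) ** 2).sum(axis=1)
          best = np.argsort(rep_scores)[-n_groups:]
          return np.flatnonzero(np.isin(group_of, best))

      frame = rng.normal(size=D)
      idx = top_components(frame)
      print(f"Scoring {idx.size} of {M} components "
            f"({M / idx.size:.1f}x fewer likelihood evaluations)")
      ```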

    19. A non-CFD modeling system for computing 3D wind and concentration fields in urban environments

      SciTech Connect (OSTI)

      Nelson, Matthew A; Brown, Michael J; Williams, Michael D; Gowardhan, Akshay; Pardyjak, Eric R

      2010-01-01

      The Quick Urban & Industrial Complex (QUIC) Dispersion Modeling System has been developed to rapidly compute the transport and dispersion of toxic agent releases in the vicinity of buildings. It is composed of an empirical-diagnostic wind solver, an 'urbanized' Lagrangian random-walk model, and a graphical user interface. The code has been used for homeland security and environmental air pollution applications. In this paper, we discuss the wind solver methodology and improvements made to the original Roeckle schemes in order to better capture flow fields in dense built-up areas. The model-computed wind and concentration fields are then compared to measurements from several field experiments. Improvements to the QUIC Dispersion Modeling System have been made to account for the inhomogeneous and complex building layouts found in large cities. The logic that has been introduced into the code is described and comparisons of model output to full-scale outdoor urban measurements in Oklahoma City and New York City are given. Although far from perfect, the model agreed fairly well with measurements and in many cases performed equally to CFD codes.

    20. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Project Is the ACME of Computer Science to Address Climate Change. Sandia high-performance computing (HPC) researchers are working with DOE and 14 other national laboratories and institutions to develop and apply the most complete climate and Earth system model, to address the most challenging and

    1. Flow field computation of the NREL S809 airfoil using various turbulence models

      SciTech Connect (OSTI)

      Chang, Y.L.; Yang, S.L.; Arici, O. [Michigan Technological Univ., Houghton, MI (United States). Mechanical Engineering-Engineering Mechanics Dept.

      1996-10-01

      Performance comparison of three popular turbulence models, namely the Baldwin-Lomax algebraic model, Chien's Low-Reynolds-Number κ-ε model, and Wilcox's Low-Reynolds-Number κ-ω model, is given. These models were applied to calculate the flow field around the National Renewable Energy Laboratory S809 airfoil using a Total Variation Diminishing scheme. Numerical results for C_P, C_L, and C_D are presented along with the Delft experimental data. It is shown that all three models perform well for attached flow, i.e., no flow separation at low angles of attack. However, at high angles of attack with flow separation, convergence characteristics show Wilcox's model outperforms the other models. Results of this study will be used to guide the authors in their dynamic stall research.

    2. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

      SciTech Connect (OSTI)

      Simunovic, S; Zacharia, T

      1997-11-01

      This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as a part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The constitutive relations are fully derived in the framework of the continuum mechanics program DYNA3D, and the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on the strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from the physical tests are documented.
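
      The strain tensor split mentioned above separates strain into volumetric and deviatoric parts so that each can drive the constitutive response independently. A minimal sketch of the decomposition itself, independent of the DYNA3D implementation:

      ```python
      import numpy as np

      # Volumetric/deviatoric split of a small-strain tensor:
      #   eps = eps_vol + eps_dev,  eps_vol = (tr(eps)/3) * I,  tr(eps_dev) = 0.
      # Illustrates the decomposition only, not the DYNA3D damage model itself.
      eps = np.array([[ 1.0e-3,  2.0e-4,  0.0   ],
                      [ 2.0e-4, -5.0e-4,  1.0e-4],
                      [ 0.0,     1.0e-4,  3.0e-4]])

      eps_vol = (np.trace(eps) / 3.0) * np.eye(3)   # volumetric part
      eps_dev = eps - eps_vol                        # deviatoric part

      assert abs(np.trace(eps_dev)) < 1e-15          # deviator is traceless
      print("volumetric strain:", np.trace(eps))
      print("deviatoric part:\n", eps_dev)
      ```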

    3. Making a Computer Model of the Most Complex System Ever Built - Continuum Magazine | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Photo of a man and a woman pointing to a large computer screen, which shows an advanced systems analysis of U.S. power systems with examples of high area renewables. David Mooney, director for NREL's Strategic Energy Analysis Center (left), and Robin Newmark, NREL's associate director for Energy Analysis and Decision Support (right), examine an advanced systems analysis of the impacts of high penetrations of renewable energy on the U.S. electrical grid. Photo by Dennis

    4. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

      SciTech Connect (OSTI)

      Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

      2012-10-07

      Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied significantly from the average.
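
      The remedy argued for here can be phrased statistically: bound the CFD mean prediction using the scatter of repeated experiments. A minimal sketch of that bookkeeping, with made-up blending-time data:

      ```python
      import statistics

      # Derive a safety (correction) factor for a CFD-predicted mean blending
      # time from the scatter of repeated experiments. All numbers are made up.
      measured_min = [96, 112, 104, 131, 88, 119, 107, 125]  # repeated tests
      cfd_mean_min = 100.0                                   # CFD prediction

      mean = statistics.mean(measured_min)
      sdev = statistics.stdev(measured_min)
      bound = mean + 2.0 * sdev          # ~95% upper bound if roughly normal
      safety_factor = bound / cfd_mean_min

      print(f"experimental mean = {mean:.1f} min, std dev = {sdev:.1f} min")
      print(f"design blending time = CFD mean x {safety_factor:.2f}")
      ```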

    5. COMPARATIVE COMPUTATIONAL MODELING OF AIRFLOWS AND VAPOR DOSIMETRY IN THE RESPIRATORY TRACTS OF RAT, MONKEY, AND HUMAN

      SciTech Connect (OSTI)

      Corley, Richard A.; Kabilan, Senthil; Kuprat, Andrew P.; Carson, James P.; Minard, Kevin R.; Jacob, Rick E.; Timchalk, Charles; Glenny, Robb W.; Pipavath, Sudhaker; Cox, Timothy C.; Wallis, Chris; Larson, Richard; Fanucchi, M.; Postlewait, Ed; Einstein, Daniel R.

      2012-07-01

      Coupling computational fluid dynamics (CFD) with physiologically based pharmacokinetic (PBPK) models is useful for predicting site-specific dosimetry of airborne materials in the respiratory tract and elucidating the importance of species differences in anatomy, physiology, and breathing patterns. Historically, these models were limited to discrete regions of the respiratory system. CFD/PBPK models have now been developed for the rat, monkey, and human that encompass airways from the nose or mouth to the lung. A PBPK model previously developed to describe acrolein uptake in nasal tissues was adapted to the extended airway models as an example application. Model parameters for each anatomic region were obtained from the literature, measured directly, or estimated from published data. Airflow and site-specific acrolein uptake patterns were determined under steady-state inhalation conditions to provide direct comparisons with prior data and nasal-only simulations. Results confirmed that regional uptake was dependent upon airflow rates and acrolein concentrations, with nasal extraction efficiencies predicted to be greatest in the rat, followed by the monkey, then the human. For human oral-breathing simulations, acrolein uptake rates in oropharyngeal and laryngeal tissues were comparable to nasal tissues following nasal breathing under the same exposure conditions. For both breathing modes, higher uptake rates were predicted for lower tracheo-bronchial tissues of humans than either the rat or monkey. These extended airway models provide a unique foundation for comparing dosimetry across a significantly more extensive range of conducting airways in the rat, monkey, and human than prior CFD models.

    6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

      SciTech Connect (OSTI)

      Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

      2015-01-01

      The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

    7. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      SciTech Connect (OSTI)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

    8. Computational model, method, and system for kinetically-tailoring multi-drug chemotherapy for individuals

      DOE Patents [OSTI]

      Gardner, Shea Nicole (San Leandro, CA)

      2007-10-23

      A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment options based on the efficacy value.
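
      A stripped-down version of the kind of cell-kinetic model described in the patent tracks proliferating (P) and quiescent (Q) populations, with drug kill acting only on proliferating cells during dosing windows. All rate constants and the schedule in the sketch below are hypothetical.

      ```python
      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy proliferating/quiescent (P/Q) tumor model with pulsed drug kill on
      # the proliferating pool. All rates and the schedule are hypothetical.
      r, K = 0.04, 1e9          # growth rate (1/h), carrying capacity (cells)
      k_pq, k_qp = 0.01, 0.002  # P->Q and Q->P transition rates (1/h)

      def kill(t):              # drug effect: 6-h infusion every 72 h
          return 0.15 if (t % 72.0) < 6.0 else 0.0

      def rhs(t, y):
          P, Q = y
          dP = r * P * (1 - (P + Q) / K) - k_pq * P + k_qp * Q - kill(t) * P
          dQ = k_pq * P - k_qp * Q
          return [dP, dQ]

      sol = solve_ivp(rhs, (0.0, 336.0), [1e7, 5e6], max_step=0.5)
      P_end, Q_end = sol.y[:, -1]
      print(f"after 2 weeks: P = {P_end:.3e}, Q = {Q_end:.3e} cells")
      ```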

    9. Protein superfamily members as targets for computer modeling: The carbohydrate recognition domain of a macrophage lectin

      SciTech Connect (OSTI)

      Stenkamp, R.E.; Aruffo, A.; Bajorath, J.

      1996-12-31

      Members of protein superfamilies display similar folds, but share only limited sequence identity, often 25% or less. Thus, it is not straightforward to apply standard homology modeling methods to construct reliable three-dimensional models of such proteins. A three-dimensional model of the carbohydrate recognition domain of the rat macrophage lectin, a member of the calcium-dependent (C-type) lectin superfamily, has been generated to illustrate how information provided by comparison of X-ray structures and sequence-structure alignments can aid in comparative modeling when primary sequence similarities are low. 20 refs., 4 figs.

    10. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
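
      The routing behavior in the claim can be illustrated with two independent adjacency maps: if the link needed on the first network is defective, a path is found on the second. The sketch below uses a plain breadth-first search over small hypothetical topologies.

      ```python
      from collections import deque

      # Route around a defective link: try network A; if the required link is
      # down, fall back to an independent network B. Topologies are hypothetical.
      def bfs_path(adj, src, dst, bad_links=frozenset()):
          """Shortest hop path avoiding edges in bad_links, or None."""
          prev, queue, seen = {}, deque([src]), {src}
          while queue:
              u = queue.popleft()
              if u == dst:
                  path = [dst]
                  while path[-1] != src:
                      path.append(prev[path[-1]])
                  return path[::-1]
              for v in adj[u]:
                  if v not in seen and frozenset((u, v)) not in bad_links:
                      seen.add(v)
                      prev[v] = u
                      queue.append(v)
          return None

      net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # a line topology
      net_b = {0: [2], 1: [3], 2: [0, 3], 3: [1, 2]}      # independent wiring
      defective = frozenset({frozenset((1, 2))})          # failed link in A

      route = bfs_path(net_a, 0, 3, defective) or bfs_path(net_b, 0, 3)
      print("route used:", route)
      ```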

    11. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

      SciTech Connect (OSTI)

      Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.

      2012-07-01

      A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

    12. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

      SciTech Connect (OSTI)

      Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

      2014-05-15

      Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6% to 23.8%) and 14.6% (range: −7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8% to 40.3%) and 13.1% (range: −1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
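
      The power-fit general linear model can be emulated by regressing log daily volume on log initial volume and treatment day. The sketch below fits such a model by ordinary least squares on synthetic data; it illustrates the model form only, not the study's patients or exact covariates.

      ```python
      import numpy as np

      # Emulate the power-fit GLM: log(V_d) ~ b0 + b1*log(V_0) + b2*d,
      # fit by least squares on synthetic "daily tumor volume" data.
      rng = np.random.default_rng(1)
      v0 = rng.uniform(5.0, 40.0, size=12)            # initial volumes (cm^3)
      days = np.arange(30)
      # synthetic truth: ~1.5%/day shrinkage, power 0.95 on V_0, plus noise
      logv = (0.95 * np.log(v0)[:, None] - 0.015 * days
              + rng.normal(0.0, 0.02, (12, 30)))

      X = np.column_stack([np.ones(12 * 30),
                           np.repeat(np.log(v0), 30),
                           np.tile(days, 12)])
      beta, *_ = np.linalg.lstsq(X, logv.ravel(), rcond=None)
      print("fitted [b0, b1, b2]:", np.round(beta, 3))

      v0_new, day = 20.0, 15                          # predict a new case
      pred = np.exp(beta @ [1.0, np.log(v0_new), day])
      print(f"predicted volume on day {day}: {pred:.1f} cm^3")
      ```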

    13. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    14. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effectmore » at larger scales, higher impact velocities, or both.« less

    15. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    16. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z

      SciTech Connect (OSTI)

      Jennings, C. A.; Ampleford, D. J.; Lamppa, D. C.; Hansen, S. B.; Jones, B.; Harvey-Thompson, A. J.; Jobe, M.; Strizic, T.; Reneker, J.; Rochau, G. A.; Cuneo, M. E.

      2015-05-15

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    17. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z.

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; et al

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    18. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z.

      SciTech Connect (OSTI)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; Strizic, T.

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    19. Computational modeling of structure of metal matrix composite in centrifugal casting process

      SciTech Connect (OSTI)

      Zagorski, Roman [Department of Electrotechnology, Faculty of Materials Science and Metallurgy, Silesian University of Technology, ul. Krasinskiego 8, 40-019, Katowice (Poland)

      2007-04-07

      The structure of an alumina matrix composite reinforced with crystalline particles, obtained during a centrifugal casting process, is studied. Several parameters of the casting process, such as pouring temperature, rotating speed and size of the casting mould, which influence the structure of the composite, are examined. Segregation of the crystalline particles, dependent on other factors such as the density gradient of the liquid matrix and reinforcement, thermal processes connected with solidification of the cast, and processes leading to changes in the physical and structural properties of the liquid composite, is also investigated. All simulations are carried out with the CFD program Fluent. Numerical simulations are performed using the FLUENT two-phase free-surface (air and matrix) unsteady flow model (volume of fluid model, VOF) and the discrete phase model (DPM).

    20. DFT modeling of adsorption onto uranium metal using large-scale parallel computing

      SciTech Connect (OSTI)

      Davis, N.; Rizwan, U.

      2013-07-01

      There is a dearth of atomistic simulations involving the surface chemistry of γ-uranium, which is of interest as the key fuel component of a breeder-burner stage in future fuel cycles. Recent availability of high-performance computing hardware and software has rendered extended quantum chemical surface simulations involving actinides feasible. With that motivation, data for bulk and surface γ-phase uranium metal are calculated in the plane-wave pseudopotential density functional theory method. Chemisorption of atomic hydrogen and oxygen on several un-relaxed low-index faces of γ-uranium is considered. The optimal adsorption sites (calculated cohesive energies) on the (100), (110), and (111) faces are found to be the one-coordinated top site (8.8 eV), four-coordinated center site (9.9 eV), and one-coordinated top1 site (7.9 eV), respectively, for oxygen; and the four-coordinated center site (2.7 eV), four-coordinated center site (3.1 eV), and three-coordinated top2 site (3.2 eV) for hydrogen. (authors)
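
      The adsorption energies quoted follow from total-energy differences of the form E_ads = -(E_slab+X - E_slab - E_X). The sketch below encodes only that bookkeeping; the three totals are hypothetical placeholders, not values from these calculations.

      ```python
      # Adsorption energy from DFT total energies:
      #   E_ads = -(E_slab_plus_adsorbate - E_slab - E_isolated_atom)
      # so a positive E_ads means binding. The inputs below are placeholders,
      # not the paper's computed totals.
      def adsorption_energy_ev(e_slab_ads, e_slab, e_atom):
          return -(e_slab_ads - e_slab - e_atom)

      e_ads = adsorption_energy_ev(e_slab_ads=-512.4,  # slab + O (hypothetical)
                                   e_slab=-502.1,      # clean slab (hypothetical)
                                   e_atom=-1.5)        # isolated O (hypothetical)
      print(f"E_ads = {e_ads:.1f} eV")   # -(-512.4 + 502.1 + 1.5) = 8.8 eV
      ```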

    1. Unveiling Stability Criteria of DNA-Carbon Nanotubes Constructs by Scanning Tunneling Microscopy and Computational Modeling

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Kilina, Svetlana; Yarotski, Dzmitry A.; Talin, A. Alec; Tretiak, Sergei; Taylor, Antoinette J.; Balatsky, Alexander V.

      2011-01-01

      We present a combined approach that relies on computational simulations and scanning tunneling microscopy (STM) measurements to reveal morphological properties and stability criteria of carbon nanotube-DNA (CNT-DNA) constructs. Application of STM allows direct observation of very stable CNT-DNA hybrid structures with the well-defined DNA wrapping angle of 63.4° and a coiling period of 3.3 nm. Using force field simulations, we determine how the DNA-CNT binding energy depends on the sequence and binding geometry of a single strand DNA. This dependence allows us to quantitatively characterize the stability of a hybrid structure with an optimal π-stacking between DNA nucleotides and the tube surface and better interpret STM data. Our simulations clearly demonstrate the existence of a very stable DNA binding geometry for (6,5) CNT as evidenced by the presence of a well-defined minimum in the binding energy as a function of an angle between DNA strand and the nanotube chiral vector. This novel approach demonstrates the feasibility of CNT-DNA geometry studies with subnanometer resolution and paves the way towards complete characterization of the structural and electronic properties of drug-delivering systems based on DNA-CNT hybrids as a function of DNA sequence and nanotube chirality.

    2. COMPUTATIONAL AND EXPERIMENTAL MODELING OF THREE-PHASE SLURRY-BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Isaac K. Gamwo; Dimitri Gidaspow

      1999-09-01

      Considerable progress has been achieved in understanding three-phase reactors from the point of view of kinetic theory. In a paper in press for publication in Chemical Engineering Science (Wu and Gidaspow, 1999) we have obtained a complete numerical solution of bubble column reactors. In view of the complexity of the simulation a better understanding of the processes using simplified analytical solutions is required. Such analytical solutions are presented in the attached paper, Large Scale Oscillations or Gravity Waves in Risers and Bubbling Beds. This paper presents analytical solutions for bubbling frequencies and standing wave flow patterns. The flow patterns in operating slurry bubble column reactors are not optimum. They involve upflow in the center and downflow at the walls. It may be possible to control flow patterns by proper redistribution of heat exchangers in slurry bubble column reactors. We also believe that the catalyst size in operating slurry bubble column reactors is not optimum. To obtain an optimum size we are following up on the observation of George Cody of Exxon who reported a maximum granular temperature (random particle kinetic energy) for a particle size of 90 microns. The attached paper, Turbulence of Particles in a CFB and Slurry Bubble Columns Using Kinetic Theory, supports George Cody's observations. However, our explanation for the existence of the maximum in granular temperature differs from that proposed by George Cody. Further computer simulations and experiments involving measurements of granular temperature are needed to obtain a sound theoretical explanation for the possible existence of an optimum catalyst size.

    3. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE-funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.
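
      Stripped to its core, k-mer annotation indexes signature 8-mers from proteins of known function and assigns a function to a new sequence by k-mer hits. A minimal sketch with invented sequences and labels (the real pipeline is far more careful about signature selection):

      ```python
      from collections import Counter

      # Toy version of k-mer (8-mer) based annotation: build an index from
      # signature k-mers of proteins with known functions, then annotate a
      # query by majority vote over its k-mer hits. Sequences/labels invented.
      K = 8

      def kmers(seq, k=K):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      index = {}
      known = {"MKTAYIAKQRQISFVKSH": "enolase (hypothetical label)",
               "MSLLTEVETPIRNEWGCR": "permease (hypothetical label)"}
      for seq, func in known.items():
          for km in kmers(seq):
              index[km] = func

      def annotate(query):
          hits = Counter(index[km] for km in kmers(query) if km in index)
          return hits.most_common(1)[0][0] if hits else "no assignment"

      print(annotate("XXMKTAYIAKQRQISFVKSHYY"))
      ```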

    4. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    5. Mathematical and computational modeling of the diffraction problems by discrete singularities method

      SciTech Connect (OSTI)

      Nesvit, K. V.

      2014-11-12

      The main objective of this study is to reduce the boundary-value problems of wave scattering and diffraction on plane-parallel structures to singular or hypersingular integral equations. For these cases we use the method of parametric representations of integral and pseudo-differential operators. Numerical results for model scattering problems on periodic and boundary gratings, and on gratings above a flat screen reflector, are presented in this paper.
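
      The abstract gives no equations; for orientation, a generic hypersingular integral equation of the class such reductions produce (a sketch only, not necessarily the exact operator form used in the paper) is

        \[ \frac{1}{\pi}\,\mathrm{f.p.}\!\int_{-1}^{1} \frac{u(t)}{(t-x)^{2}}\,dt \;+\; \int_{-1}^{1} K(x,t)\,u(t)\,dt \;=\; f(x), \qquad |x| < 1, \]

      where u is the unknown density on the grating strip, f.p. denotes the Hadamard finite-part integral, K is a smooth kernel arising from the parametric representation of the operators, and f is determined by the incident field.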

    6. The Use Of Computational Human Performance Modeling As Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
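
      A minimal sketch of a task-network Monte Carlo of this kind (illustrative only: the study used commercial discrete-event simulation software, and the task names, durations, and error probabilities below are invented, not values from the study):

        import random

        # Hypothetical task network: (name, mean duration in s, error probability)
        TASKS = [("position bridge crane", 45.0, 0.001),
                 ("grip fuel element", 30.0, 0.005),
                 ("raise element in canal", 60.0, 0.010),
                 ("visual inspection", 90.0, 0.002)]

        def run_trial(rng):
            elapsed, error = 0.0, False
            for _name, mean_s, p_err in TASKS:
                elapsed += rng.expovariate(1.0 / mean_s)  # stochastic duration
                error = error or (rng.random() < p_err)
            return elapsed, error

        rng = random.Random(1)
        trials = [run_trial(rng) for _ in range(100_000)]
        print("mean completion time (s):", sum(t for t, _ in trials) / len(trials))
        print("P(at least one error):", sum(e for _, e in trials) / len(trials))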

    7. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural, and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    8. Computational Intelligence Based Data Fusion Algorithm for Dynamic sEMG and Skeletal Muscle Force Modelling

      SciTech Connect (OSTI)

      Chandrasekhar Potluri,; Madhavi Anugolu; Marco P. Schoen; D. Subbaram Naidu

      2013-08-01

      In this work, an array of three surface electromyography (sEMG) sensors is used to acquire muscle extension and contraction signals for 18 healthy test subjects. The skeletal muscle force is estimated using the acquired sEMG signals and a nonlinear Wiener-Hammerstein model, relating the two signals in a dynamic fashion. The model is obtained using a System Identification (SI) algorithm. The obtained force models for each sensor are fused using a proposed fuzzy logic concept with the intent to improve the force estimation accuracy and resilience to sensor failure or misalignment. For the fuzzy logic inference system, the sEMG entropy, the relative error, and the correlation of the force signals are considered for defining the membership functions. The proposed fusion algorithm yields an average of 92.49% correlation between the actual force and the overall estimated force output. In addition, the proposed fusion-based approach is implemented on a test platform. Experiments indicate an improvement in finger/hand force estimation.
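
      As an illustrative sketch of the fusion step (simplified: the published approach derives weights from a fuzzy inference system over entropy, relative error, and correlation, whereas here fixed confidence scores stand in for the rule outputs; names and values are ours):

        import numpy as np

        def fuse_estimates(forces, confidences):
            """Fuse per-sensor force estimates with normalized confidence weights.

            forces: (n_sensors, n_samples) force estimates from each sEMG model.
            confidences: (n_sensors,) scores, e.g. from correlation/entropy rules.
            """
            w = np.asarray(confidences, dtype=float)
            w /= w.sum()
            return w @ np.asarray(forces)

        # Three sensors, one degraded (low confidence), three time samples:
        f = np.array([[1.0, 1.2, 1.1], [1.1, 1.3, 1.2], [0.2, 0.1, 0.3]])
        print(fuse_estimates(f, confidences=[0.90, 0.85, 0.10]))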

    9. Data aNd Computation Reordering package using temporal and spatial hypergraphs

      Energy Science and Technology Software Center (OSTI)

      2004-08-01

      A package for experimentation with data and computation reordering algorithms. One can input various file formats representing sparse matrices, reorder data and computation through the specification of command-line parameters, and time benchmark computations that use the new data and computation orderings. The package includes existing reordering algorithms and new ones introduced by the authors based on the temporal and spatial locality hypergraph model.
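
      A minimal sketch of the idea (ours, and deliberately crude: a row-major computation reordering for a COO sparse matrix-vector multiply, standing in for the package's hypergraph-based orderings):

        import numpy as np

        def reorder_coo_rowmajor(rows, cols, vals):
            """Sort COO entries row-major so the multiply touches y sequentially."""
            order = np.lexsort((cols, rows))  # primary key: rows, secondary: cols
            return rows[order], cols[order], vals[order]

        def spmv_coo(rows, cols, vals, x, n):
            y = np.zeros(n)
            for r, c, v in zip(rows, cols, vals):
                y[r] += v * x[c]
            return y

        rows = np.array([2, 0, 1, 0]); cols = np.array([1, 2, 0, 0])
        vals = np.array([1.0, 2.0, 3.0, 4.0])
        r, c, v = reorder_coo_rowmajor(rows, cols, vals)
        print(spmv_coo(r, c, v, x=np.ones(3), n=3))  # [6. 3. 1.]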

    10. NREL: Water Power Research - Computer-Aided Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Engineering NREL is collaborating with other national laboratories, federal agencies, universities, and industry members to develop comprehensive and validated sets of computer-aided engineering modeling tools to accelerate the development of marine hydrokinetic technologies and improve the performance of hydroelectric facilities. Recent modeling efforts include: Wave Energy Converter Device and Array Modeling Current Device and Array Performance Modeling and Optimization Reference Model

    11. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In This Section: Computation & Simulation. Extensive combinatorial results and ongoing basic...

    12. A Computational Model of the Mark-IV Electrorefiner: Phase I -- Fuel Basket/Salt Interface

      SciTech Connect (OSTI)

      Robert Hoover; Supathorn Phongikaroon; Shelly Li; Michael Simpson; Tae-Sic Yoo

      2009-09-01

      Spent driver fuel from the Experimental Breeder Reactor-II (EBR-II) is currently being treated in the Mk-IV electrorefiner (ER) in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory. The modeling approach presented here has been developed to help understand the effect of different parameters on the dynamics of this system. The first phase of this new modeling approach focuses on the fuel basket/salt interface, involving the transport of various species found in the driver fuels (e.g., uranium and zirconium). This approach reduces the number of estimated parameters to one: the exchange current density (i0). U3+ and Zr4+ were the only species used for the current study. The results reveal that most of the total cell current is used for the oxidation of uranium, with little being used by zirconium. The dimensionless approach shows that the total potential is a strong function of i0 and a weak function of the wt% of uranium in the salt system for initiation processes.
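
      For orientation, the exchange current density typically enters such models through a Butler-Volmer-type rate law; in its standard textbook form (shown for reference, not necessarily the Mk-IV model's exact formulation),

        \[ i \;=\; i_{0}\left[\exp\!\left(\frac{\alpha_{a} F \eta}{R T}\right) - \exp\!\left(-\frac{\alpha_{c} F \eta}{R T}\right)\right], \]

      where eta is the surface overpotential, alpha_a and alpha_c are the anodic and cathodic transfer coefficients, F is Faraday's constant, R the gas constant, and T the salt temperature.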

    13. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for...

    14. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    15. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
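
      A minimal sketch of the life-cycle costing at the heart of such models (illustrative only: AQUASTOR's cash-flow treatment also covers financing, taxes, and separate supply and distribution entities; the simple discounting and all numbers below are ours):

        def levelized_cost(costs, outputs, discount_rate):
            """Levelized cost: discounted cash outflows over discounted delivered heat.

            costs: yearly costs ($) by year, starting at year 0.
            outputs: yearly delivered heat (GJ) on the same timeline.
            """
            pv_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
            pv_heat = sum(q / (1 + discount_rate) ** t for t, q in enumerate(outputs))
            return pv_cost / pv_heat

        # 20-year example: capital in year 0, then O&M; constant heat delivery.
        costs = [5.0e6] + [2.0e5] * 20
        heat = [0.0] + [8.0e4] * 20
        print(f"levelized cost: {levelized_cost(costs, heat, 0.07):.2f} $/GJ")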

    16. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; et al.

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.
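
      Intratumor diversity of the kind analyzed above is commonly summarized with an index such as Shannon's; a minimal sketch (the paper's exact metric may differ, and the clone labels below are invented):

        import math
        from collections import Counter

        def shannon_diversity(labels):
            """Shannon index H = -sum(p_i * ln p_i) over clone/phenotype labels."""
            counts = Counter(labels)
            n = sum(counts.values())
            return -sum((c / n) * math.log(c / n) for c in counts.values())

        pre = ["cloneA"] * 60 + ["cloneB"] * 30 + ["cloneC"] * 10
        post = ["cloneA"] * 95 + ["cloneB"] * 5
        print(f"H pre = {shannon_diversity(pre):.3f}, post = {shannon_diversity(post):.3f}")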

    17. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; Rye, Inga H.; Børresen-Dale, Anne-Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    18. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    19. Computational Study of Bond Dissociation Enthalpies for Substituted β-O-4 Lignin Model Compounds

      SciTech Connect (OSTI)

      Younker, Jarod M; Beste, Ariana; Buchanan III, A C

      2011-01-01

      The biopolymer lignin is a potential source of valuable chemicals. Phenethyl phenyl ether (PPE) is representative of the dominant β-O-4 ether linkage. Density functional theory (DFT) is used to calculate the Boltzmann-weighted carbon-oxygen and carbon-carbon bond dissociation enthalpies (BDEs) of substituted PPE. These values are important in order to understand lignin decomposition. Exclusion of all conformers that have distributions of less than 5% at 298 K impacts the BDE by less than 1 kcal/mol. We find that aliphatic hydroxyl/methylhydroxyl substituents introduce only small changes to the BDEs (0-3 kcal/mol). Substitution on the phenyl ring at the ortho position substantially lowers the C-O BDE, except in combination with the hydroxyl/methylhydroxyl substituents, where the effect of methoxy substitution is reduced by hydrogen bonding. Hydrogen bonding between the aliphatic substituents and the ether oxygen in the PPE derivatives has a significant influence on the BDE. CCSD(T)-calculated BDEs and hydrogen bond strengths of ortho-substituted anisoles, when compared with M06-2X values, confirm that the latter method is sufficient to describe the molecules studied and provide an important benchmark for lignin model compounds.
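
      The Boltzmann weighting named above is straightforward to state; a minimal sketch (illustrative only: the conformer BDEs and relative free energies below are invented, not values from the paper):

        import math

        R = 1.987204e-3  # gas constant, kcal/(mol K)

        def boltzmann_weighted(values, rel_energies, T=298.15):
            """Boltzmann-average per-conformer values by relative energy (kcal/mol)."""
            weights = [math.exp(-e / (R * T)) for e in rel_energies]
            return sum(v * w for v, w in zip(values, weights)) / sum(weights)

        # Three hypothetical conformers of a substituted PPE:
        bdes = [68.2, 69.0, 68.6]   # per-conformer C-O BDEs, kcal/mol
        rel_g = [0.0, 0.4, 1.1]     # relative free energies, kcal/mol
        print(f"weighted BDE = {boltzmann_weighted(bdes, rel_g):.2f} kcal/mol")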

    20. Toward the standard population synthesis model of the X-ray background: Evolution of X-ray luminosity and absorption functions of active galactic nuclei including Compton-thick populations

      SciTech Connect (OSTI)

      Ueda, Yoshihiro; Akiyama, Masayuki; Hasinger, Günther; Miyaji, Takamitsu; Watson, Michael G.

      2014-05-10

      We present the most up-to-date X-ray luminosity function (XLF) and absorption function of active galactic nuclei (AGNs) over the redshift range from 0 to 5, utilizing the largest, highly complete sample ever available, obtained from surveys performed with Swift/BAT, MAXI, ASCA, XMM-Newton, Chandra, and ROSAT. The combined sample, including that of the Subaru/XMM-Newton Deep Survey, consists of 4039 detections in the soft (0.5-2 keV) and/or hard (>2 keV) band. We utilize a maximum likelihood method to reproduce the count rate versus redshift distribution for each survey, by taking into account the evolution of the absorbed fraction, the contribution from Compton-thick (CTK) AGNs, and broadband spectra of AGNs, including reflection components from tori, based on the luminosity- and redshift-dependent unified scheme. We find that the shape of the XLF at z ~ 1-3 is significantly different from that in the local universe, for which the luminosity-dependent density evolution model gives a much better description than the luminosity and density evolution model. These results establish the standard population synthesis model of the X-ray background (XRB), which well reproduces the source counts, the observed fractions of CTK AGNs, and the spectrum of the hard XRB. The number ratio of CTK AGNs to the absorbed Compton-thin (CTN) AGNs is constrained to be ~0.5-1.6 to produce the 20-50 keV XRB intensity within present uncertainties, by assuming that they follow the same evolution as CTN AGNs. The growth history of supermassive black holes is discussed based on the new AGN bolometric luminosity function.
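
      For reference, the local XLF shape referred to above is conventionally written as a smoothed double power law (a standard parameterization, not necessarily this paper's exact fit),

        \[ \frac{d\Phi(L_{X}, z=0)}{d\log L_{X}} \;=\; A\left[\left(\frac{L_{X}}{L_{*}}\right)^{\gamma_{1}} + \left(\frac{L_{X}}{L_{*}}\right)^{\gamma_{2}}\right]^{-1}, \]

      with luminosity-dependent density evolution applied as a multiplicative factor whose cutoff redshift itself depends on L_X.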

    1. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

      SciTech Connect (OSTI)

      Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

      2012-01-01

      Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

    2. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH: Computational Fluid Dynamics, Computational Structural Mechanics, Transportation Systems Modeling. Overview of CSM: Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    3. Automated Office Systems Support (AOSS) Quality Assurance Model...

      Broader source: Energy.gov (indexed) [DOE]

      assurance model, including checklists, for activity relative to network and desktop computer support. Automated Office Systems Support (AOSS) Quality Assurance Model More...

    4. Information regarding previous INCITE awards including selected...

      Office of Science (SC) Website

      on Theory & Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Computational Science Graduate Fellowship (CSGF) Research & Evaluation Prototypes (REP) Science...

    5. Inter-comparison of Computer Codes for TRISO-based Fuel Micro-Modeling and Performance Assessment

      SciTech Connect (OSTI)

      Brian Boer; Chang Keun Jo; Wen Wu; Abderrafi M. Ougouag; Donald McEachren; Francesco Venneri

      2010-10-01

      The Next Generation Nuclear Plant (NGNP), the Deep Burn Pebble Bed Reactor (DB-PBR) and the Deep Burn Prismatic Block Reactor (DB-PMR) are all based on fuels that use TRISO particles as their fundamental constituent. The TRISO particle properties include very high durability in radiation environments, hence the designs' reliance on the TRISO to form the principal barrier to radioactive materials release. This durability forms the basis for the selection of this fuel type for applications such as Deep Burn (DB), which require exposures up to four times those expected for light water reactors. It follows that the study and prediction of the durability of TRISO particles must be carried out as part of the safety and overall performance characterization of all the designs mentioned above. Such evaluations have been carried out independently by the performers of the DB project using independently developed codes. These codes, PASTA, PISA and COPA, incorporate models for stress analysis of the various layers of the TRISO particle (and, for some of the codes, of the intervening matrix material); models for fission product release and migration, with accumulation immediately adjacent to the SiC layer of the TRISO particle; models for free oxygen and CO formation and migration to the same location; models for the temperature field within the various layers of the TRISO particle; and models for the prediction of failure rates. All these models may be either internal or external to the code. This large number of models, the possibility of different constitutive data and model formulations, and the variety of possible solution techniques make it highly unlikely that the codes would give identical results when modeling identical situations. The purpose of this paper is to present the results of an inter-comparison between the codes and to identify areas of agreement and areas that need reconciliation. The inter-comparison has been carried out by the cooperating institutions using a set of pre-defined TRISO conditions (burnup levels, temperature or power levels, etc.), and the outcome will be tabulated in the full-length paper. The areas of agreement will be pointed out and the areas that require further modeling or reconciliation will be shown. In general, the agreement between the codes is good, within less than one order of magnitude in the prediction of TRISO failure rates.
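
      One ingredient all such fuel-performance codes share is an estimate of the stress carried by the SiC layer under internal gas pressure. In the simplest thin-shell approximation (a textbook estimate for orientation, not the codes' full viscoelastic, irradiation-dependent treatment),

        \[ \sigma_{\theta} \;\approx\; \frac{p\,r}{2\,t}, \]

      where p is the internal pressure from fission gases and CO, r is the mean radius of the layer, and t its thickness; a particle is then typically counted as failed when this stress exceeds a strength sampled from a Weibull distribution.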

    6. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    7. Computational fluid dynamics modeling of chemical looping combustion process with calcium sulphate oxygen carrier - article no. A19

      SciTech Connect (OSTI)

      Baosheng Jin; Rui Xiao; Zhongyi Deng; Qilei Song

      2009-07-01

      To concentrate CO₂ in combustion processes by efficient and energy-saving ways is a first and very important step for its sequestration. Chemical looping combustion (CLC) could easily achieve this goal. A chemical-looping combustion system consists of a fuel reactor and an air reactor. Two reactors in the form of interconnected fluidized beds are used in the process: (1) a fuel reactor where the oxygen carrier is reduced by reaction with the fuel, and (2) an air reactor where the reduced oxygen carrier from the fuel reactor is oxidized with air. The outlet gas from the fuel reactor consists of CO₂ and H₂O, while the outlet gas stream from the air reactor contains only N₂ and some unused O₂. The water in the combustion products can be easily removed by condensation, and pure carbon dioxide is obtained without any loss of energy for separation. Until now, there has been little literature on mathematical modeling of chemical-looping combustion using the computational fluid dynamics (CFD) approach. In this work, the reaction kinetic model of the fuel reactor (CaSO₄ + H₂) is developed by means of the commercial code FLUENT, and the effects of the partial pressure of H₂ (concentration of H₂) on chemical looping combustion performance are also studied. The results show that the concentration of H₂ could enhance the CLC performance.

    8. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Widespread Hydrogen Fueling Infrastructure Is the Goal of H2FIRST Project. Tags: Capabilities, Center for Infrastructure Research and Innovation (CIRI), Computational Modeling & Simulation, Energy, Energy Storage, Energy Storage Systems, Facilities, Infrastructure Security, Materials Science, Modeling, Modeling & Analysis, News, News & Events, Partnership, Research & Capabilities, Systems Analysis, Systems Engineering, Transportation Energy

    9. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

      SciTech Connect (OSTI)

      Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

      2013-11-15

      Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
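
      As an illustrative sketch of the nonlinear forward model such full-spectral approaches rest on (the symbols, bin count, and values below are ours), the expected detector signal is a spectrum-weighted sum of Beer-Lambert attenuations of the two basis materials:

        import numpy as np

        def polychromatic_projection(a_water, a_bone, spectrum, mu_water, mu_bone):
            """Expected detector signal for one ray under a polychromatic beam.

            a_water, a_bone: line integrals of the material fractions (cm).
            spectrum: (n_E,) effective source-detector spectrum, sums to 1.
            mu_water, mu_bone: (n_E,) attenuation coefficients (1/cm) per energy bin.
            """
            attenuation = np.exp(-(a_water * mu_water + a_bone * mu_bone))
            return float(spectrum @ attenuation)

        # Two-bin toy spectrum (real spectra use many bins):
        s = np.array([0.6, 0.4])
        mu_w = np.array([0.25, 0.18])
        mu_b = np.array([0.90, 0.45])
        print(polychromatic_projection(2.0, 0.5, s, mu_w, mu_b))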

    10. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    11. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    12. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    13. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    14. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    15. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    16. Computational Analysis of the Pyrolysis of β-O-4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon-neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M06-2X density functional theory with the 6-311++G(d,p) basis set. The M06-2X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O-4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin: bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be more suited to producing a fungible bio-oil for the production of liquid transportation fuels.

    17. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

      SciTech Connect (OSTI)

      Katya Le Blanc; Johanna Oxstrand

      2012-04-01

      The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

    18. Communication with U.S. federal decision makers : a primer with notes on the use of computer models as a means of communication.

      SciTech Connect (OSTI)

      Webb, Erik Karl; Tidwell, Vincent Carroll

      2009-10-01

      This document outlines ways to more effectively communicate with U.S. Federal decision makers by outlining the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community. However, understanding the risk of technical, political, and/or financial infeasibility and possible unintended consequences is extremely difficult. Primarily, decision makers are supported in their deliberations by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - What can these decision makers do? (2) Finding the target audience - Who are the internal and external trusted advisors? (3) Packaging the message - How do we parse and integrate information, and how do we use computer simulation or models in policy communication?

    19. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project's goals and the resulting research made possible by this award.

    20. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    1. Technology for Increasing Geothermal Energy Productivity. Computer Models to Characterize the Chemical Interactions of Geothermal Fluids and Injectates with Reservoir Rocks, Wells, Surface Equipment

      SciTech Connect (OSTI)

      Nancy Moller Weare

      2006-07-25

      This final report describes the results of a research program we carried out over a five-year (3/1999-9/2004) period with funding from a Department of Energy geothermal FDP grant (DE-FG07-99ID13745) and from other agencies. The goal of the research projects in this program was to develop modeling technologies that can increase the understanding of geothermal reservoir chemistry and chemistry-related energy production processes. The ability of computer models to handle many chemical variables and complex interactions makes them an essential tool for building a fundamental understanding of a wide variety of complex geothermal resource and production chemistry. With careful choice of methodology and parameterization, the research objectives were to show that chemical models can correctly simulate behavior for the ranges of fluid compositions, formation minerals, temperature, and pressure associated with present and near-future geothermal systems, as well as for the very high PT chemistry of deep resources that is intractable with traditional experimental methods. Our research results successfully met these objectives. We demonstrated that advances in physical chemistry theory can be used to accurately describe the thermodynamics of solid-liquid-gas systems via their free energies for wide ranges of composition (X), temperature, and pressure. Eight articles on this work were published in peer-reviewed journals and in conference proceedings; four are in preparation. Our work has been presented at many workshops and conferences. We also considerably improved our interactive web site (geotherm.ucsd.edu), which was in preliminary form prior to the grant. This site, which includes several model codes treating different XPT conditions, is an effective means to transfer our technologies and is used by the geothermal community and other researchers worldwide. Our models have wide application to many energy-related and other important problems (e.g., scaling prediction in petroleum production systems, stripping towers for mineral production processes, nuclear waste storage, CO2 sequestration strategies, global warming). Although funding decreases cut short the completion of several research activities, we made significant progress on these abbreviated projects.
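
      A minimal sketch of the thermodynamic link such free-energy models exploit (a standard relation; the reaction and numbers below are invented): the equilibrium constant of a dissolution reaction follows from its standard free energy via dG = -RT ln K.

        import math

        R = 8.314462  # gas constant, J/(mol K)

        def equilibrium_constant(delta_g_kj_mol, temperature_k):
            """K from the standard reaction free energy via dG = -RT ln K."""
            return math.exp(-delta_g_kj_mol * 1e3 / (R * temperature_k))

        # Hypothetical mineral dissolution with dG = +20 kJ/mol:
        for T in (298.15, 473.15):
            print(f"T = {T:.0f} K  ->  K = {equilibrium_constant(20.0, T):.3e}")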

    2. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

      SciTech Connect (OSTI)

      Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

      2011-02-01

      Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where little or no ground-based measurement is available.

    3. Programs | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program, ALCC Program, Director's Discretionary (DD) Program, Early Science Program. Featured Science: Magnetohydrodynamic Models of Accretion Including Radiation Transport, James Stone; Allocation Program: INCITE; Allocation Hours: 47 Million. Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code.

    4. Trends and challenges when including microstructure in materials...

      Office of Scientific and Technical Information (OSTI)

      Title: Trends and challenges when including microstructure in materials modeling: Examples of ...

    5. Computational Fluid Dynamics Modeling of the Bonneville Project: Tailrace Spill Patterns for Low Flows and Corner Collector Smolt Egress

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.; Perkins, William A.

      2010-12-01

      In 2003, an extension of the existing ice and trash sluiceway was added at Bonneville Powerhouse 2 (B2). This extension started at the existing corner collector for the ice and trash sluiceway adjacent to Bonneville Powerhouse 2, and the new sluiceway was extended to the downstream end of Cascade Island. The sluiceway was designed to improve juvenile salmon survival by bypassing turbine passage at B2 and placing these smolt in downstream-flowing water, minimizing their exposure to fish and avian predators. In this study, a previously developed computational fluid dynamics model was modified and used to characterize tailrace hydraulics and sluiceway egress conditions for low total river flows and low levels of spillway flow. STAR-CD v4.10 was used for seven scenarios of low total river flow and low spill discharges. The simulation results were examined for tailrace hydraulics at 5 ft below the tailwater elevation, and streamlines were used to compare pathways originating in the corner collector outfall and adjacent to the outfall. These streamlines indicated that for all higher spill percentage cases (25% and greater), streamlines from the corner collector did not approach the shoreline at the downstream end of Bradford Island. For the cases with much larger spill percentages, the streamlines from the corner collector were mid-channel or closer to the Washington shore as they moved downstream. Although at 25% spill at 75 kcfs total river flow the total spill volume was sufficient to "cushion" the flow from the corner collector from the Bradford Island shore, areas of recirculation were modeled in the spillway tailrace. However, at the lowest flows and spill percentages, the streamlines from the B2 corner collector pass very close to the Bradford Island shore. In addition, the very low flow velocities and large areas of recirculation greatly increase the potential predator exposure of spillway-passed smolt. If there is concern for egress issues for smolt passing through the spillway, the spill pattern and volume need to be revisited.

    6. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sandia Labs Releases New Version of PVLib Toolbox. Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power systems. The version 1.3 release includes the following added functions: functions to estimate parameters for popular PV module models, including PVsyst and the CEC '5 parameter' model; a new model of the effects of solar...

    7. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A more detailed hierarchical map of the topology of a compute node is available.

    8. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2016 Computer System, Cluster, and Networking Summer Institute (undergraduate summer institute, Educational Programs), http://isti.lanl.gov. Purpose: The Computer System,...

    9. Computational Tools for Accelerating Carbon Capture Process Development

      SciTech Connect (OSTI)

      Miller, David; Sahinidis, N.V,; Cozad, A; Lee, A; Kim, H; Morinelly, J.; Eslick, J.; Yuan, Z.

      2013-06-04

      This presentation reports the development of advanced computational tools to accelerate next-generation carbon capture technology development. These tools are used to develop an optimized process based on rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).
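
      A minimal sketch of the algebraic surrogate idea (illustrative only: the basis, sample data, and plain least-squares fit below are ours; production tools also select the basis functions automatically):

        import numpy as np

        # Samples from a (hypothetical) rigorous process model: x -> capture cost
        x = np.linspace(0.5, 0.95, 10)       # CO2 capture fraction
        y = 40 + 25 * x + 60 * x ** 3        # stand-in for simulation output

        # Algebraic surrogate y ~ c0 + c1*x + c2*x^3 with a basis fixed a priori
        basis = np.column_stack([np.ones_like(x), x, x ** 3])
        coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
        print("surrogate coefficients:", np.round(coeffs, 3))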

    10. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

    11. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    12. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-01-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines (HATTs). First, an HATT blade was designed using the blade element momentum method in conjunction with a genetic optimization algorithm. Several unstructured computational grids were generated using this blade geometry and steady CFD simulations were used to perform a grid resolution study. Transient simulations were then performed to determine the effect of time-dependent flow phenomena and the size of the computational timestep on the numerical solution. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.

    13. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-10-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.

    14. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect (OSTI)

      Osorio, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21 C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes deliver sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20 C in 30 s and from 37 to 15 C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied for the design and development of cooling devices for seizure control.
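
      For reference, the Pennes bioheat equation used in such models has the standard textbook form (quoted here for orientation, not from the paper itself):

        \rho c \frac{\partial T}{\partial t} = \nabla \cdot ( k \nabla T ) + \rho_b c_b \omega_b ( T_a - T ) + q_m

      where \rho, c, and k are the tissue density, specific heat, and thermal conductivity; \rho_b, c_b, and \omega_b are the blood density, specific heat, and perfusion rate; T_a is the arterial blood temperature; and q_m is the metabolic heat source. A cooling probe typically enters such a model as a boundary condition or an additional sink term.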

    15. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

      SciTech Connect (OSTI)

      Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

      2014-09-15

      Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC performance of the models was 0.590 (95% CI, 0.537-0.642) and was also significantly higher than 0.5 (p = 0.0009). Therefore, generally the authors’ models were able to predict which masses were detected and which were missed better than chance. Conclusions: The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms the existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow for the optimized selection of difficult cases for the trainees in an automatic and efficient manner.
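
      A minimal sketch of the third step of this pipeline (the per-trainee error-making classifier); the segmentation and 43-feature extraction are assumed already done, and the feature values, classifier choice, and sizes below are illustrative placeholders, not the authors' setup:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # X: one row of 43 computer-vision features per expert-identified mass
        # y: 1 if this trainee missed the mass in past readings, else 0
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 43))      # placeholder feature matrix
        y = rng.integers(0, 2, size=200)    # placeholder miss/detect labels

        model = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
        p_miss = model.predict_proba(X[150:])[:, 1]  # predicted miss likelihood
        print("AUC:", roc_auc_score(y[150:], p_miss))

      With random placeholder data the AUC will hover near 0.5; the study's reported 0.607 reflects real error-making patterns in the trainees' reading data.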

    16. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    17. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    18. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the invention includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous 'valid state' was noted.
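
      The strong-link/weak-link distinction at the heart of this scheme can be illustrated with Python's weakref module (a loose analogy under CPython's reference counting, not the patented pointer system):

        import weakref

        class Node:
            def __init__(self, name):
                self.name = name
                self.children = []        # strong links: keep children alive

        parent = Node("model")
        child = Node("object")
        parent.children.append(child)     # strong link, child stays reachable
        back = weakref.ref(parent)        # weak link, does not keep parent alive

        del parent                        # break the last strong link...
        print(back())                     # ...and the object is collected: None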

    19. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      science & technology Environmental modeling tools Programs Mathematics, computing, & computer science Modeling, simulation, & visualization Rob Jacob, Computational Climate...

    20. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      diffuse interface methods in ALE-AMR code with application in modeling NDCX-II experiments Wangyi Liu 1 , John Barnard 2 , Alex Friedman 2 , Nathan Masters 2 , Aaron Fisher 2 , Alice Koniges 2 , David Eder 2 1 LBNL, USA, 2 LLNL, USA This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was performed

    1. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

      DOE Patents [OSTI]

      Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

      2001-01-16

      Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of time magnitudes.
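
      The integer-based stepping described above can be pictured generically as a march through a uniform voxel grid that stops where the anatomical material changes (illustrative Python, not the patented method; the grid contents and single-axis step rule are assumptions):

        import numpy as np

        def march_to_boundary(grid, start, direction):
            """Step voxel by voxel along the primary direction of movement
            until the next element holds a different anatomical material."""
            ijk = np.array(start, dtype=int)
            axis = int(np.argmax(np.abs(direction)))   # primary direction
            step = 1 if direction[axis] > 0 else -1    # integer increment
            material = grid[tuple(ijk)]
            while True:
                ijk[axis] += step
                if not (0 <= ijk[axis] < grid.shape[axis]):
                    return None                        # particle exited the model
                if grid[tuple(ijk)] != material:
                    return tuple(ijk)                  # position of intersection

        grid = np.zeros((10, 10, 10), dtype=int)
        grid[6:, :, :] = 1                             # second material region
        print(march_to_boundary(grid, (2, 5, 5), (1.0, 0.2, 0.0)))  # (6, 5, 5)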

    2. Analytical and computational study of the ideal full two-fluid plasma model and asymptotic approximations for Hall-magnetohydrodynamics

      SciTech Connect (OSTI)

      Srinivasan, B.; Shumlak, U.

      2011-09-15

      The 5-moment two-fluid plasma model uses Euler equations to describe the ion and electron fluids and Maxwell's equations to describe the electric and magnetic fields. Two-fluid physics becomes significant when the characteristic spatial scales are on the order of the ion skin depth and characteristic time scales are on the order of the ion cyclotron period. The full two-fluid plasma model has disparate characteristic speeds ranging from the ion and electron speeds of sound to the speed of light. Two asymptotic approximations are applied to the full two-fluid plasma to arrive at the Hall-MHD model, namely negligible electron inertia and infinite speed of light. The full two-fluid plasma model and the Hall-MHD model are studied for applications to an electromagnetic plasma shock, geospace environmental modeling (GEM challenge) magnetic reconnection, an axisymmetric Z-pinch, and an axisymmetric field reversed configuration (FRC).
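
      For orientation, the momentum balance of the 5-moment two-fluid model for each species s (ions or electrons) has the standard form, coupled to Maxwell's equations through the Lorentz force term (a textbook statement; notation may differ from the paper's):

        \frac{\partial}{\partial t} ( m_s n_s \mathbf{u}_s )
          + \nabla \cdot ( m_s n_s \mathbf{u}_s \mathbf{u}_s + p_s \mathbf{I} )
          = q_s n_s ( \mathbf{E} + \mathbf{u}_s \times \mathbf{B} )

      Dropping the electron inertia terms and taking the speed of light to infinity are the two asymptotic approximations that reduce this system toward the Hall-MHD model discussed above.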

    3. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC » CCS » CCS-7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    4. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    5. Mechanism and computational model for Lyman-{alpha}-radiation generation by high-intensity-laser four-wave mixing in Kr-Ar gas

      SciTech Connect (OSTI)

      Louchev, Oleg A.; Saito, Norihito; Wada, Satoshi; Bakule, Pavel; Yokoyama, Koji; Ishida, Katsuhiko; Iwasaki, Masahiko

      2011-09-15

      We present a theoretical model combined with a computational study of a laser four-wave mixing process under optical discharge in which the non-steady-state four-wave amplitude equations are integrated with the kinetic equations of initial optical discharge and electron avalanche ionization in Kr-Ar gas. The model is validated by earlier experimental data showing strong inhibition of the generation of pulsed, tunable Lyman-{alpha} (Ly-{alpha}) radiation when using sum-difference frequency mixing of 212.6 nm and tunable infrared radiation (820-850 nm). The rigorous computational approach to the problem reveals the possibility and mechanism of strong auto-oscillations in sum-difference resonant Ly-{alpha} generation due to the combined effect of (i) 212.6-nm (2+1)-photon ionization producing initial electrons, followed by (ii) the electron avalanche dominated by 843-nm radiation, and (iii) the final breakdown of the phase matching condition. The model shows that the final efficiency of Ly-{alpha} radiation generation can achieve a value of {approx}5x10{sup -4} which is restricted by the total combined absorption of the fundamental and generated radiation.

    6. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
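
      A rough sketch of the claimed sequence, using MPI collectives in place of the parallel computer's tree network and a ping-pong measurement for the root-to-node latency (mpi4py; the names and the latency estimate are illustrative assumptions, not the patented mechanism):

        import time
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Calculate data transmission latency from the root to each node.
        if rank == 0:
            latency = 0.0
            for dst in range(1, comm.Get_size()):
                t0 = time.perf_counter()
                comm.send(None, dest=dst)
                comm.recv(source=dst)
                comm.send((time.perf_counter() - t0) / 2.0, dest=dst)
        else:
            comm.recv(source=0)
            comm.send(None, dest=0)
            latency = comm.recv(source=0)

        comm.Barrier()                  # stand-in for the local/global barriers
        comm.bcast(None, root=0)        # stand-in for the pulse signal
        time_base = latency             # time base = root-to-node latency
        comm.Barrier()                  # exit the global barrier together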

    7. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    8. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    9. Multi-processor including data flow accelerator module

      DOE Patents [OSTI]

      Davidson, George S.; Pierce, Paul E.

      1990-01-01

      An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
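
      A toy rendering of the firing rule described above: each memory cell carries operand slots, tag bits marking filled slots, and linking slots to consumer cells; a cell is queued for execution as soon as its tags show all operands present (illustrative Python, not the patented architecture):

        from collections import deque

        class Cell:
            def __init__(self, primitive, n_operands, consumers):
                self.primitive = primitive        # the cell's computation
                self.slots = [None] * n_operands  # operand slots
                self.tags = 0                     # tag bits for filled slots
                self.consumers = consumers        # linking slots: (cell, slot)

        ready = deque()                           # execution queue

        def deliver(cell, slot, value):
            cell.slots[slot] = value
            cell.tags |= 1 << slot
            if cell.tags == (1 << len(cell.slots)) - 1:
                ready.append(cell)                # all operands available

        def run():
            while ready:
                cell = ready.popleft()
                result = cell.primitive(*cell.slots)
                for consumer, slot in cell.consumers:
                    deliver(consumer, slot, result)  # distribute the result

        out = Cell(print, 1, [])
        add = Cell(lambda a, b: a + b, 2, [(out, 0)])
        deliver(add, 0, 2)
        deliver(add, 1, 3)
        run()                                     # prints 5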

    10. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for...

    11. BLT-EC (Breach, Leach and Transport-Equilibrium Chemistry) data input guide. A computer model for simulating release and coupled geochemical transport of contaminants from a subsurface disposal facility

      SciTech Connect (OSTI)

      MacKinnon, R.J.; Sullivan, T.M.; Kinsey, R.R.

      1997-05-01

      The BLT-EC computer code has been developed, implemented, and tested. BLT-EC is a two-dimensional finite element computer code capable of simulating the time-dependent release and reactive transport of aqueous phase species in a subsurface soil system. BLT-EC contains models to simulate the processes (container degradation, waste-form performance, transport, chemical reactions, and radioactive production and decay) most relevant to estimating the release and transport of contaminants from a subsurface disposal system. Water flow is provided through tabular input or auxiliary files. Container degradation considers localized failure due to pitting corrosion and general failure due to uniform surface degradation processes. Waste-form performance considers release to be limited by one of four mechanisms: rinse with partitioning, diffusion, uniform surface degradation, and solubility. Transport considers the processes of advection, dispersion, diffusion, chemical reaction, radioactive production and decay, and sources (waste form releases). Chemical reactions accounted for include complexation, sorption, dissolution-precipitation, oxidation-reduction, and ion exchange. Radioactive production and decay in the waste form is simulated. To improve the usefulness of BLT-EC, a pre-processor, ECIN, which assists in the creation of chemistry input files, and a post-processor, BLTPLOT, which provides a visual display of the data have been developed. BLT-EC also includes an extensive database of thermodynamic data that is also accessible to ECIN. This document reviews the models implemented in BLT-EC and serves as a guide to creating input files and applying BLT-EC.

    12. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    13. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    14. Computer-Aided Engineering of Batteries for Designing Better Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Lee, K. J.; Santhanagopalan, S.

      2012-02-01

      This presentation describes the current status of the DOE's Energy Storage R and D program, including modeling and design tools and the Computer-Aided Engineering for Automotive Batteries (CAEBAT) program.

    15. Computed solid phases limiting the concentration of dissolved constituents in basalt aquifers of the Columbia Plateau in eastern Washington. Geochemical modeling and nuclide/rock/groundwater interaction studies

      SciTech Connect (OSTI)

      Deutsch, W.J.; Jenne, E.A.; Krupka, K.M.

      1982-08-01

      A speciation-solubility geochemical model, WATEQ2, was used to analyze geographically diverse ground-water samples from the aquifers of the Columbia Plateau basalts in eastern Washington. The ground-water samples compute to be at equilibrium with calcite, which provides both a solubility control for dissolved calcium and a pH buffer. Amorphic ferric hydroxide, Fe(OH){sub 3}(A), is at saturation or modestly oversaturated in the few water samples with measured redox potentials. Most of the ground-water samples compute to be at equilibrium with amorphic silica (glass) and wairakite, a zeolite, and are saturated to oversaturated with respect to allophane, an amorphic aluminosilicate. The water samples are saturated to undersaturated with halloysite, a clay, and are variably oversaturated with regard to other secondary clay minerals. Equilibrium between the ground water and amorphic silica presumably results from the dissolution of the glassy matrix of the basalt. The oversaturation of the clay minerals other than halloysite indicates that their rate of formation lags the dissolution rate of the basaltic glass. The modeling results indicate that metastable amorphic solids limit the concentration of dissolved silicon and suggest the same possibility for aluminum and iron, and that the processes of dissolution of basaltic glass and formation of metastable secondary minerals are continuing even though the basalts are of Miocene age. The computed solubility relations are found to agree with the known assemblages of alteration minerals in the basalt fractures and vesicles. Because the chemical reactivity of the bedrock will influence the transport of solutes in ground water, the observed solubility equilibria are important factors with regard to chemical-retention processes associated with the possible migration of nuclear waste stored in the earth's crust.
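
      The saturation states reported by speciation-solubility codes such as WATEQ2 are conventionally expressed as a saturation index (standard definition, included here for orientation):

        SI = \log_{10} \frac{\mathrm{IAP}}{K_{sp}}

      where IAP is the ion activity product computed from the speciated water analysis and K_{sp} is the mineral's solubility product; SI = 0 indicates equilibrium, SI > 0 oversaturation, and SI < 0 undersaturation.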

    16. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts; A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James Dumesic; Rahul Nabar; Calvin Bartholomew; Hu Zou; Uchenna Paul

      2008-09-29

      This work focuses on (1) searching/summarizing published Fischer-Tropsch synthesis (FTS) mechanistic and kinetic studies of FTS reactions on iron catalysts; (2) preparation and characterization of unsupported iron catalysts with/without potassium/platinum promoters; (3) measurement of H{sub 2} and CO adsorption/dissociation kinetics on iron catalysts using transient methods; (4) analysis of the transient rate data to calculate kinetic parameters of early elementary steps in FTS; (5) construction of a microkinetic model of FTS on iron; and (6) validation of the model from collection of steady-state rate data for FTS on iron catalysts. Three unsupported iron catalysts and three alumina-supported iron catalysts were prepared by non-aqueous-evaporative deposition (NED) or aqueous impregnation (AI) and characterized by chemisorption, BET, temperature-programmed reduction (TPR), extent-of-reduction, XRD, and TEM methods. These catalysts, covering a wide range of dispersions and metal loadings, are well-reduced and relatively thermally stable up to 500-600 C in H{sub 2} and thus ideal for kinetic and mechanistic studies. Kinetic parameters for CO adsorption, CO dissociation, and surface carbon hydrogenation on these catalysts were determined from temperature-programmed desorption (TPD) of CO and temperature-programmed surface hydrogenation (TPSR), temperature-programmed hydrogenation (TPH), and isothermal, transient hydrogenation (ITH). A microkinetic model was constructed for the early steps in FTS on polycrystalline iron from the kinetic parameters of elementary steps determined experimentally in this work and from literature values. Steady-state rate data were collected in a Berty reactor and used for validation of the microkinetic model. These rate data were fitted to 'smart' Langmuir-Hinshelwood rate expressions derived from a sequence of elementary steps and using a combination of fitted steady-state parameters and parameters specified from the transient measurements. The results provide a platform for further development of microkinetic models of FTS on Fe and a basis for more precise modeling of FTS activity of Fe catalysts. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on various realistic models of industrial, Fe-based FTS catalysts. The close-packed, most stable Fe(110) facet was analyzed; carbide formation was subsequently found to be facile, leading to the choice of the FeC(110) model representing a Fe facet with a sub-surface C atom. The Pt adatom model (Fe{sup Pt}(110)) was found to be the most stable for our studies of Pt promotion, and finally the role of steps was elucidated by recourse to the defected Fe(211) facet. Binding energies (BEs), preferred adsorption sites, and geometries for all FTS-relevant stable species and intermediates were evaluated on each model catalyst facet. A mechanistic model (comprising 32 elementary steps involving 19 species) was constructed and each elementary step therein was fully characterized with respect to its thermochemistry and kinetics. Kinetic calculations involved evaluation of the Minimum Energy Pathways (MEPs) and activation energies (barriers) for each step. Vibrational frequencies were evaluated for the preferred adsorption configuration of each species with the aim of evaluating entropy changes and pre-exponential factors and serving as a useful connection with experimental surface science techniques. Comparative analysis among these four facets revealed important trends in their relative behavior and roles in FTS catalysis. Overall, the first-principles calculations afforded new insight into FTS catalysis on Fe and modified-Fe catalysts.

    17. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    18. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

      2011-08-16

      An apparatus and method for controlling power usage in a computing system includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with each computer to ascertain its power usage, and a system control device communicates with the computers to control their power usage.

    19. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    20. Application of a watershed computer model to assess reclaimed landform stability in support of reclamation liability release

      SciTech Connect (OSTI)

      Peterson, M.R.; Zevenbergen, L.W.; Cochran, J.

      1995-09-01

      The Surface Mining Control and Reclamation Act of 1977 (SMCRA) instituted specific requirements for surface coal mine reclamation that included reclamation bonding and tied release of liability to achieving acceptable reclamation standards. Generally, such reclamation standards include successfully revegetating the site, achieving the approved postmine land use and minimizing disturbances to the prevailing hydrologic balance. For western surface coal mines the period of liability continues for a minimum of 10 years commencing with the last year of augmented seeding, fertilizing, irrigation or other work. This paper describes the methods and procedures conducted to evaluate the runoff and sediment yield response from approximately 2,700 acres of reclaimed lands at Peabody Western Coal Company's (PWCC) Black Mesa Mine located near Kayenta, Arizona. These analyses were conducted in support of an application for liability release submitted to the Office of Surface Mining (OSM) for reclaimed interim land parcels within the 2,700 acres evaluated.

    1. Properties of a soft-core model of methanol: An integral equation theory and computer simulation study

      SciTech Connect (OSTI)

      Hu, Matej; Urbic, Tomaz; Munaò, Gianmarco

      2014-10-28

      Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed.
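
      The RISM (site-site Ornstein-Zernike) relation solved in such calculations has the standard matrix form in Fourier space (a textbook statement; the closure approximation is what the paper compares):

        \hat{h}(k) = \hat{\omega}(k) \, \hat{c}(k) \, \hat{\omega}(k)
                   + \rho \, \hat{\omega}(k) \, \hat{c}(k) \, \hat{h}(k)

      where h and c are matrices of site-site total and direct correlation functions, \omega encodes the intramolecular structure (here the fixed methyl-hydroxyl dimer geometry), and \rho is the number density.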

    2. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. Rayleigh-Taylor turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    3. Numerical simulations for low energy nuclear reactions including direct

      Office of Scientific and Technical Information (OSTI)

      channels to validate statistical models (Conference) | SciTech Connect Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Authors: Kawano, Toshihiko [1] Los Alamos National Laboratory Publication Date: 2014-01-08 OSTI

    4. Unitarity bounds in the Higgs model including triplet fields...

      Office of Scientific and Technical Information (OSTI)

      fine-tuning because of the imposed global SU(2){sub R} symmetry in the Higgs ... Country of Publication: United States Language: English Subject: 72 PHYSICS OF ELEMENTARY ...

    5. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

      Office of Scientific and Technical Information (OSTI)

      In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will ...

    6. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    7. Compositional modeling in porous media using constant volume flash and flux computation without the need for phase identification

      SciTech Connect (OSTI)

      Polívka, Ondřej; Mikyška, Jiří

      2014-09-01

      The paper deals with the numerical solution of a compositional model describing compressible two-phase flow of a mixture composed of several components in porous media with species transfer between the phases. The mathematical model is formulated by means of the extended Darcy's laws for all phases, component continuity equations, constitutive relations, and appropriate initial and boundary conditions. The splitting of components among the phases is described using a new formulation of the local thermodynamic equilibrium which uses volume, temperature, and moles as specification variables. The problem is solved numerically using a combination of the mixed-hybrid finite element method for the total flux discretization and the finite volume method for the discretization of transport equations. A new approach to numerical flux approximation is proposed, which does not require the phase identification and determination of correspondence between the phases on adjacent elements. The time discretization is carried out by the backward Euler method. The resulting large system of nonlinear algebraic equations is solved by the Newton-Raphson iterative method. We provide eight examples of different complexity to show reliability and robustness of our approach.
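
      The extended Darcy's law referred to above has, for each phase \alpha, the standard multiphase form (a textbook statement; notation may differ from the paper's):

        \mathbf{u}_\alpha = - \frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K} ( \nabla p_\alpha - \rho_\alpha \mathbf{g} )

      where \mathbf{K} is the permeability tensor, k_{r\alpha} the relative permeability, \mu_\alpha the viscosity, p_\alpha the phase pressure, and \rho_\alpha \mathbf{g} the gravity term; these phase fluxes feed the component continuity equations discretized above.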

    8. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    9. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts: A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James A. Dumesic; Rahul P. Nabar

      2006-09-29

      Work continued on the development of a microkinetic model of Fischer-Tropsch synthesis (FTS) on supported and unsupported Fe catalysts. The following aspects of the FT mechanism on unsupported iron catalysts were investigated during this third year: (1) the collection of rate data in a Berty CSTR reactor based on sequential design of experiments; (2) CO adsorption and CO-TPD for obtaining the heat of adsorption of CO on polycrystalline iron; and (3) isothermal hydrogenation (IH) after Fischer-Tropsch reaction to identify and quantify surface carbonaceous species. Rates of C{sub 2+} formation on unsupported iron catalysts at 220 C and 20 atm correlated well to a Langmuir-Hinshelwood-type expression, derived assuming carbon hydrogenation to CH and OH recombination to water to be rate-determining steps. From desorption of molecularly adsorbed CO at different temperatures, the heat of adsorption of CO on polycrystalline iron was determined to be 100 kJ/mol. Amounts and types of carbonaceous species formed after FT reaction for 5-10 minutes at 150, 175, 200 and 285 C vary significantly with temperature. Mr. Brian Critchfield completed his M.S. thesis work on a statistically designed study of the kinetics of FTS on 20% Fe/alumina. Preparation of a paper describing this work is in progress. Results of these studies were reported at the Annual Meeting of the Western States Catalysis Club and at the San Francisco AIChE meeting. In the coming period, studies will focus on quantitative determination of the rates of kinetically-relevant elementary steps on unsupported Fe catalysts with/without K and Pt promoters by the SSITKA method. This study will help us to (1) understand effects of promoter and support on elementary kinetic parameters and (2) build a microkinetic model for FTS on iron. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on models of defected Fe surfaces, most significantly the stepped Fe(211) surface. Binding energies (BEs), preferred adsorption sites, and geometries of all the FTS-relevant stable species and intermediates were evaluated. Each elementary step of our reaction model was fully characterized with respect to its thermochemistry, and comparisons between the stepped Fe(211) facet and the most-stable Fe(110) facet were established. In most cases the BEs on Fe(211) reflected the trends observed earlier on Fe(110), yet there were significant variations imposed on the underlying trends. Vibrational frequencies were evaluated for the preferred adsorption configurations of each species with the aim of evaluating the entropy changes and pre-exponential factors for each elementary step. Kinetic studies were performed for the early steps of FTS (up to CH{sub 4} formation) and CO dissociation. This involved evaluation of the Minimum Energy Pathway (MEP) and activation energy barrier for the steps involved. We concluded that Fe(211) would allow for far more facile CO dissociation in comparison to other Fe catalysts studied so far, but the other FTS steps studied remained mostly unchanged.
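
      A generic Langmuir-Hinshelwood-type rate expression of the kind fitted here has the form (illustrative only; the paper's actual expression, derived from its assumed rate-determining steps, is not reproduced):

        r = \frac{ k \, K_{CO} K_{H_2} \, p_{CO} \, p_{H_2} }
                 { ( 1 + K_{CO} p_{CO} + K_{H_2} p_{H_2} )^2 }

      where k is the rate constant of the rate-determining surface step and the K's are adsorption equilibrium constants for the competing surface species.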

    10. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    11. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    12. Topic A Note: Includes STEPS Subtopic

      Energy Savers [EERE]

      Topic A Note: Includes STEPS Subtopic 33 Total Projects Developing and Enhancing Workforce Training Programs

    13. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    14. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging

      SciTech Connect (OSTI)

      Nillius, Peter; Klamra, Wlodek; Danielsson, Mats; Sibczynski, Pawel; Sharma, Diksha; Badano, Aldo

      2015-02-15

      Purpose: The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. Methods: The authors measured light output from a 490-{mu}m CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridMANTIS, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. Results: The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV{sup -1} while measurements using the fast PMT instrument setup and sealed sources reported a light output of 28 keV{sup -1}. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR, resulting in a bulk absorption coefficient of 5x10{sup -5} {mu}m{sup -1}. Conclusions: The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-source and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5x10{sup -5} {mu}m{sup -1}. To the authors' knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.

    15. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping

      SciTech Connect (OSTI)

      Henke, Paul S.; Mak, Chi H.

      2014-08-14

      The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg{sup 2+} that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
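
      The deterministic graph search over a discretely connected landscape can be pictured with a generic Dijkstra-style traversal (purely illustrative Python; the states, costs, and neighbor rule below are toy assumptions, not the authors' algorithm):

        import heapq

        def search(neighbors, energy, start):
            """Deterministic traversal of a discrete landscape: returns the
            lowest cumulative uphill cost to reach every connected state."""
            best = {start: 0.0}
            heap = [(0.0, start)]
            while heap:
                cost, state = heapq.heappop(heap)
                if cost > best[state]:
                    continue                        # stale queue entry
                for nxt in neighbors(state):
                    c = cost + max(energy(nxt) - energy(state), 0.0)
                    if c < best.get(nxt, float("inf")):
                        best[nxt] = c
                        heapq.heappush(heap, (c, nxt))
            return best

        # Toy landscape: states are ion-binding patterns on four sites.
        energy = lambda s: -bin(s).count("1")           # each bound ion lowers E
        neighbors = lambda s: [s ^ (1 << i) for i in range(4)]
        print(search(neighbors, energy, 0b0000)[0b1111])  # 0.0: all downhill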

    16. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

    17. Radiological Safety Analysis Computer Program

      Energy Science and Technology Software Center (OSTI)

      2001-08-28

      RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New and exciting updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.

    18. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    19. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    20. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively One 6-MB

    1. Dedicated heterogeneous node scheduling including backfill scheduling

      DOE Patents [OSTI]

      Wood, Robert R. (Livermore, CA); Eckert, Philip D. (Livermore, CA); Hommes, Gregg (Pleasanton, CA)

      2006-07-25

      A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the free nodes over time. For each prioritized job, the FNS of sub-pools having nodes usable by the job is used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
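
      A compressed sketch of the scheduling rule for a single homogeneous sub-pool (illustrative Python; the per-sub-pool FNS bookkeeping and dedicated-node handling of the patent are elided):

        def backfill_schedule(jobs, total_nodes, horizon):
            """jobs: (priority, nodes, runtime) tuples. Returns start times.
            The free node schedule (FNS) is a per-timestep free-node count."""
            free = [total_nodes] * horizon
            start = {}
            for job in sorted(jobs, reverse=True):   # highest priority first
                _, nodes, runtime = job
                for t in range(horizon - runtime):   # earliest time range (ETR)
                    if all(free[t + dt] >= nodes for dt in range(runtime)):
                        for dt in range(runtime):    # commit the job to the FNS
                            free[t + dt] -= nodes
                        start[job] = t
                        break
            return start

        jobs = [(9, 8, 4), (5, 2, 2), (1, 2, 6)]
        print(backfill_schedule(jobs, total_nodes=10, horizon=32))

      Because higher-priority jobs are committed to the FNS first, a lower-priority job placed into any remaining hole can start early without disturbing the anticipated start time of any already-scheduled HPJ.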

    2. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    3. A Research Roadmap for Computation-Based Human Reliability Analysis

      SciTech Connect (OSTI)

      Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

      2015-08-01

      The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

    4. Low-frequency computational electromagnetics for antenna analysis

      SciTech Connect (OSTI)

      Miller, E.K.; Burke, G.J.

      1991-01-01

      An overview of low-frequency computational methods for modeling the electromagnetic characteristics of antennas is presented here. The article presents a brief analytical background and summarizes the essential ingredients of the method of moments for numerically solving low-frequency antenna problems. Some extensions to the basic models of perfectly conducting objects in free space are also summarized, followed by a consideration of some of the computational issues that affect model accuracy, efficiency, and utility. A variety of representative computations are then presented to illustrate various modeling aspects and capabilities that are currently available. A fairly extensive bibliography is included to suggest further reference material to the reader. 90 refs., 27 figs.

    5. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems, and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    6. Magnetic resonance imaging and computational fluid dynamics (CFD) simulations of rabbit nasal airflows for the development of hybrid CFD/PBPK models

      SciTech Connect (OSTI)

      Corley, Richard A.; Minard, Kevin R.; Kabilan, Senthil; Einstein, Daniel R.; Kuprat, Andrew P.; harkema, J. R.; Kimbell, Julia; Gargas, M. L.; Kinzell, John H.

      2009-06-01

      The percentages of total airflows over the nasal respiratory and olfactory epithelium of female rabbits were calculated from computational fluid dynamics (CFD) simulations of steady-state inhalation. These airflow calculations, along with nasal airway geometry determinations, are critical parameters for hybrid CFD/physiologically based pharmacokinetic models that describe the nasal dosimetry of water-soluble or reactive gases and vapors in rabbits. CFD simulations were based upon three-dimensional computational meshes derived from magnetic resonance images of three adult female New Zealand White (NZW) rabbits. In the anterior portion of the nose, the maxillary turbinates of rabbits are considerably more complex than comparable regions in rats, mice, monkeys, or humans. This leads to a greater surface area to volume ratio in this region and thus the potential for increased extraction of water-soluble or reactive gases and vapors in the anterior portion of the nose compared to many other species. Although there was considerable interanimal variability in the fine structures of the nasal turbinates and airflows in the anterior portions of the nose, there was remarkable consistency between rabbits in the percentage of total inspired airflows that reached the ethmoid turbinate region (~50%) that is presumably lined with olfactory epithelium. These latter results (airflows reaching the ethmoid turbinate region) were higher than previously published estimates for the male F344 rat (19%) and human (7%). These differences in regional airflows can have significant implications in interspecies extrapolations of nasal dosimetry.

    7. transportation-system-modeling-webinar

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Webinar Announcement: Webinar for the Intelligent Transportation Society of the Midwest (ITS Midwest), May 16, 2011, 1:00 PM (CST). Hubert Ley, Director, TRACC, Argonne National Laboratory, Argonne, Illinois. High Performance Computing in Transportation Research - High Fidelity Transportation Models and More. The Role of High-Performance Computing: Because ITS relies on a very diverse collection of technologies, including communication and control technologies, advanced computing, information management

    8. Computer code for double beta decay QRPA based calculations

      SciTech Connect (OSTI)

      Barbero, C. A.; Mariano, A.; Krmpotić, F.; Samana, A. R.; Ferreira, V. dos Santos; Bertulani, C. A.

      2014-11-11

      The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture, and β± processes, is extended to also include nuclear double beta decay.

    9. INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION

      Office of Scientific and Technical Information (OSTI)

      interval technical basis document Chiaro, P.J. Jr. 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS;...

    10. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
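
      The boot-time buffer handoff described here (and in the continuation patent that follows) can be schematized in a few lines of Python; the class and method names are hypothetical:

        # Schematic of the handoff described above: the messaging unit buffers a
        # message for a process that has not been initialized yet, and the process
        # copies it into main memory once it starts.
        class MessagingUnit:
            def __init__(self, expected_processes):
                # one pre-allocated buffer per process expected on this compute node
                self.buffers = {pid: [] for pid in expected_processes}

            def receive(self, pid, message):
                # a message may arrive before the target process initializes
                self.buffers[pid].append(message)

        class Process:
            def __init__(self, pid, messaging_unit):
                self.pid = pid
                self.main_memory_buffer = []
                # on initialization, drain the messaging unit's buffer into main memory
                self.main_memory_buffer.extend(messaging_unit.buffers.pop(pid))

        mu = MessagingUnit(expected_processes=[0, 1])
        mu.receive(0, "early message")   # arrives before process 0 exists
        p0 = Process(0, mu)              # initialization copies it into main memory
        print(p0.main_memory_buffer)     # ['early message']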

    11. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    12. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science: Home › Energy Research › Advanced Scientific Computing Research (ASCR) › Computational Science

    13. Sandia Energy Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Aerodynamic Wind-Turbine Blade Design for the National Rotor Testbed: http://energy.sandia.gov/aerodynamic-wind-turbin...

    14. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute, New Mexico Consortium and Los Alamos National Laboratory. HOW TO APPLY: Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016. Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule), 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    15. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    16. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2014-11-25

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    17. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2013-02-19

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    18. Communications circuit including a linear quadratic estimator

      DOE Patents [OSTI]

      Ferguson, Dennis D.

      2015-07-07

      A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
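
      The weighting step attributed to the LQE amounts to inverse-variance averaging: noisier measurements contribute less. A minimal illustrative sketch, not the patented circuit:

        # Inverse-variance weighted average: each measurement is weighted by the
        # reciprocal of its variance, so uncertain readings count for less.
        def weighted_average(measurements, variances):
            weights = [1.0 / v for v in variances]
            total = sum(weights)
            return sum(w * m for w, m in zip(weights, measurements)) / total

        # Two readings of the same signal level; the second is noisier.
        print(weighted_average([10.2, 9.5], [0.1, 0.4]))  # 10.06, closer to 10.2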

    19. Intentionally Including - Engaging Minorities in Physics Careers |

      Office of Environmental Management (EM)

      Department of Energy. Intentionally Including - Engaging Minorities in Physics Careers. April 24, 2013 - 4:37pm. Joining Director Dot Harris (second from left) were Marlene Kaplan, the Deputy Director of Education and director of EPP, National Oceanic and Atmospheric Administration; Claudia Rankins, a Program Officer with the National Science Foundation; and Jim Stith, the past Vice-President of the American Institute of

    20. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new...

    1. Argonne's Laboratory Computing Resource Center : 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

    2. Scramjet including integrated inlet and combustor

      SciTech Connect (OSTI)

      Kutschenreuter, P.H. Jr.; Blanton, J.C.

      1992-02-04

      This patent describes a scramjet engine. It comprises: a first surface including an aft-facing step; a cowl including: a leading edge and a trailing edge; an upper surface and a lower surface extending between the leading edge and the trailing edge; the cowl upper surface being spaced from and generally parallel to the first surface to define an integrated inlet-combustor therebetween having an inlet for receiving and channeling into the inlet-combustor supersonic inlet airflow; means for injecting fuel into the inlet-combustor at the step for mixing with the supersonic inlet airflow for generating supersonic combustion gases; and further including a spaced pair of sidewalls extending between the first surface and the cowl upper surface, wherein the integrated inlet-combustor is generally rectangular and defined by the sidewall pair, the first surface, and the cowl upper surface.

    3. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
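
      The two-group assignment is a checkerboard coloring of the mesh: a node's group is the parity of its coordinates, so every link joins a first-group node to a second-group node and can be tested in one sweep. A small Python sketch (names hypothetical):

        # Checkerboard group assignment for a rectangular mesh, as described above.
        def group(x, y):
            return (x + y) % 2   # 0 = first group (senders), 1 = second group (receivers)

        def neighbors(x, y, width, height):
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height:
                    yield nx, ny

        width, height = 4, 3
        tests = [((x, y), n)
                 for x in range(width) for y in range(height)
                 if group(x, y) == 0
                 for n in neighbors(x, y, width, height)]
        # Every mesh link appears exactly once, directed from group 0 to group 1;
        # a receiver that misses an expected test message reports a suspect link.
        print(len(tests))  # 17 links in a 4x3 mesh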

    4. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
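
      One simple way to realize such a Hamiltonian path in a two-dimensional plane of the network is a serpentine walk; the sketch below (illustrative, not the patented implementation) relays the root's message hop by hop along it:

        # A serpentine walk visits every node of a 2-D mesh plane exactly once,
        # giving a Hamiltonian path along which the logical root's message is relayed.
        def serpentine_path(width, height):
            path = []
            for y in range(height):
                xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
                path.extend((x, y) for x in xs)
            return path

        path = serpentine_path(4, 3)        # visits all 12 nodes exactly once
        inbox = {path[0]: "root payload"}   # the logical root holds the message
        for src, dst in zip(path, path[1:]):
            inbox[dst] = inbox[src]         # each node forwards to its successor
        assert len(inbox) == len(path)      # every compute node received the broadcast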

    5. Electric Power Monthly, August 1990. [Glossary included

      SciTech Connect (OSTI)

      Not Available

      1990-11-29

      The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data includes generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power, cost data; and unusual occurrences. A glossary is included.

    6. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad (Rochester, MN)

      2012-04-17

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
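
      The two-phase structure is easy to miniaturize. In the Python sketch below (hypothetical data layout), each logical ring holds one core per node; rings reduce globally, then each node combines its cores' ring results locally so every core ends with the full sum:

        # Two-phase allreduce as described above, with sums standing in for the
        # reduction operation.
        contrib = {  # contrib[node][core] = contribution data
            0: {0: 1, 1: 2},
            1: {0: 3, 1: 4},
        }
        n_cores = 2

        # Phase 1: global allreduce within each logical ring (one core per node).
        ring_result = {c: sum(contrib[n][c] for n in contrib) for c in range(n_cores)}

        # Phase 2: local allreduce across the cores of each node.
        allreduce = {n: sum(ring_result[c] for c in range(n_cores)) for n in contrib}
        print(allreduce)  # every node ends with 1+2+3+4 = 10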

    7. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing at JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo.

    8. Controlling data transfers from an origin compute node to a target compute node

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2011-06-21

      Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

    9. Assessment of Molecular Modeling & Simulation

      SciTech Connect (OSTI)

      2002-01-03

      This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

    10. Model Analysis ToolKit

      Energy Science and Technology Software Center (OSTI)

      2015-05-15

      MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes: defining parameters, defining observations, defining the model (a Python function), and defining samplesets (sets of parameter combinations). Currently supported functionality includes: forward model runs; Latin-Hypercube sampling of parameters; multi-dimensional parameter studies; parallel execution of parameter samples; model calibration using an internal Levenberg-Marquardt algorithm, the lmfit package, or the levmar package; and Markov Chain Monte Carlo using the pymc package. MATK facilitates model analysis using scipy (calibration via scipy.optimize) and rpy2 (a Python interface to R).
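
      MATK's own API is not reproduced in this record, so the sketch below performs the equivalent calibration step directly with scipy.optimize, one of the backends named above; the model and data are invented for illustration:

        # Least-squares model calibration of the kind MATK orchestrates, done
        # directly with scipy.optimize on a toy exponential-decay model.
        import numpy as np
        from scipy.optimize import least_squares

        def model(params, t):
            a, b = params
            return a * np.exp(-b * t)

        t_obs = np.linspace(0, 4, 9)
        y_obs = model([2.0, 0.5], t_obs) + 0.01 * np.random.default_rng(0).standard_normal(9)

        fit = least_squares(lambda p: model(p, t_obs) - y_obs, x0=[1.0, 1.0])
        print(fit.x)  # recovers parameters near [2.0, 0.5]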

    11. Computational Tools to Accelerate Commercial Development

      SciTech Connect (OSTI)

      Miller, David C.

      2013-01-01

      The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

    12. How Do You Reduce Energy Use from Computers and Electronics?...

      Broader source: Energy.gov (indexed) [DOE]

      discussed some ways to reduce the energy used by computers and electronics. Some tips include ensuring your computer is configured for optimal energy savings, turning off devices...

    13. Subterranean barriers including at least one weld

      DOE Patents [OSTI]

      Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

      2007-01-09

      A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

    14. Photoactive devices including porphyrinoids with coordinating additives

      DOE Patents [OSTI]

      Forrest, Stephen R; Zimmerman, Jeramy; Yu, Eric K; Thompson, Mark E; Trinh, Cong; Whited, Matthew; Diev, Vlacheslav

      2015-05-12

      Coordinating additives are included in porphyrinoid-based materials to promote intermolecular organization and improve one or more photoelectric characteristics of the materials. The coordinating additives are selected from fullerene compounds and organic compounds having free electron pairs. Combinations of different coordinating additives can be used to tailor the characteristic properties of such porphyrinoid-based materials, including porphyrin oligomers. Bidentate ligands are one type of coordinating additive that can form coordination bonds with a central metal ion of two different porphyrinoid compounds to promote porphyrinoid alignment and/or pi-stacking. The coordinating additives can shift the absorption spectrum of a photoactive material toward higher wavelengths, increase the external quantum efficiency of the material, or both.

    15. Determination Of Ph Including Hemoglobin Correction

      DOE Patents [OSTI]

      Maynard, John D. (Albuquerque, NM); Hendee, Shonn P. (Albuquerque, NM); Rohrscheib, Mark R. (Albuquerque, NM); Nunez, David (Albuquerque, NM); Alam, M. Kathleen (Cedar Crest, NM); Franke, James E. (Franklin, TN); Kemeny, Gabor J. (Madison, WI)

      2005-09-13

      Methods and apparatuses of determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample, and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select an model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.

    16. Electric power monthly, September 1990. [Glossary included

      SciTech Connect (OSTI)

      Not Available

      1990-12-17

      The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

    17. Power generation method including membrane separation

      DOE Patents [OSTI]

      Lokhandwala, Kaaeid A. (Union City, CA)

      2000-01-01

      A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

    18. Nuclear reactor shield including magnesium oxide

      DOE Patents [OSTI]

      Rouse, Carl A. (Del Mar, CA); Simnad, Massoud T. (La Jolla, CA)

      1981-01-01

      An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material and other components as described below, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide either alone or in combination with other moderator materials such as graphite and iron.

    19. Rotor assembly including superconducting magnetic coil

      DOE Patents [OSTI]

      Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)

      2003-01-01

      Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.

    20. Programming models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Task-based models Task-based models and abstractions (such as offered by CHARM++, Legion and HPX, for example) offer many attractive features for mapping computations onto...

    1. Multicore Challenges and Benefits for High Performance Scientific Computing

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Nielsen, Ida M.B.; Janssen, Curtis L.

      2008-01-01

      Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction-level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We discuss here the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We illustrate these ideas with a hybrid distributed-memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
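
      The hybrid message-passing/multi-threading model mentioned above maps message-passing ranks to multicore chips and threads to the cores within a chip. A minimal sketch using mpi4py (an assumed toolkit; the paper does not specify one):

        # Hybrid parallelism sketch: MPI ranks between chips, threads within a chip.
        # Requires mpi4py and an MPI launcher, e.g. `mpirun -n 4 python hybrid.py`.
        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        def partial_work(i):
            return (rank * 4 + i) ** 2   # each thread handles one core-sized chunk

        with ThreadPoolExecutor(max_workers=4) as pool:   # threads within the chip
            local = sum(pool.map(partial_work, range(4)))

        total = comm.allreduce(local, op=MPI.SUM)         # message passing between chips
        if rank == 0:
            print(total)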

    2. Distributed computing for signal processing: modeling of asynchronous parallel computation. Appendix C. Fault-tolerant interconnection networks and image-processing applications for the PASM parallel processing systems. Final report

      SciTech Connect (OSTI)

      Adams, G.B.

      1984-12-01

      The demand for very-high-speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single-instruction stream-multiple data stream (SIMD) and multiple instruction stream - multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task. Thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, reliability of the ESC and an enhanced version of it are investigated.

    3. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High-Performance Computing: INL's high-performance computing center provides general-use scientific computing capabilities to support the lab's efforts in advanced...

    4. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab: The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    5. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-03-06

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    6. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard (Dunkirk, MD); Veligdan, James T. (Manorville, NY)

      2007-11-20

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    7. Thermovoltaic semiconductor device including a plasma filter

      DOE Patents [OSTI]

      Baldasaro, Paul F. (Clifton Park, NY)

      1999-01-01

      A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.

    8. Drapery assembly including insulated drapery liner

      DOE Patents [OSTI]

      Cukierski, Gwendolyn (Ithaca, NY)

      1983-01-01

      A drapery assembly is disclosed for covering a framed wall opening, the assembly including drapery panels hung on a horizontal traverse rod, the rod having a pair of master slides and means for displacing the master slides between open and closed positions. A pair of insulating liner panels are positioned behind the drapery, the remote side edges of the liner panels being connected with the side portions of the opening frame, and the adjacent side edges of the liner panels being connected with a pair of vertically arranged center support members adapted for sliding movement longitudinally of a horizontal track member secured to the upper horizontal portion of the opening frame. Pivotally arranged brackets connect the center support members with the master slides of the traverse rod whereby movement of the master slides to effect opening and closing of the drapery panels effects simultaneous opening and closing of the liner panels.

    9. Engine lubrication circuit including two pumps

      DOE Patents [OSTI]

      Lane, William H.

      2006-10-03

      A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominately at speeds above the peak torque engine speed, the lubrication pump often produces lubrication fluid in excess of the predetermined flow volume, which is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to more efficiently lubricate an engine, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary a lubrication fluid output from the variable delivery pump as a function of at least one of engine speed and lubrication flow volume or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at the speed range at which the engine predominately operates, while the variable delivery pump can supplement lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.

    10. Modeling

      SciTech Connect (OSTI)

      Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

      2005-09-01

      Slurry flows occur in many circumstances, including chemical manufacturing processes, pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

    11. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

      SciTech Connect (OSTI)

      Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.; Stewart, G. M.; Jonkman, J.; Robertson, A.

      2014-09-01

      Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
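
      For reference, Morison's equation in its standard per-unit-length form gives the inline force as an inertia term plus a drag term, F = rho*Cm*V*(du/dt) + 0.5*rho*Cd*A*u*|u|. The sketch below uses illustrative coefficients, not values from the FAST/OpenFOAM study:

        # Morison inline force per unit length on a vertical cylinder section.
        import math

        def morison_force(u, dudt, diameter, rho=1025.0, cm=2.0, cd=1.2):
            area = diameter                       # projected area per unit length
            volume = math.pi * diameter**2 / 4    # displaced volume per unit length
            inertia = rho * cm * volume * dudt    # inertia (added-mass) term
            drag = 0.5 * rho * cd * area * u * abs(u)   # quadratic drag term
            return inertia + drag

        print(morison_force(u=1.5, dudt=0.8, diameter=6.5))  # N per meter of column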

    12. Radiation Detection Computational Benchmark Scenarios

      SciTech Connect (OSTI)

      Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

      2013-09-24

      Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected Benchmarks and results from various transport codes.

    13. Numerical uncertainty in computational engineering and physics

      SciTech Connect (OSTI)

      Hemez, Francois M

      2009-01-01

      Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes a distant third to test-analysis comparison and model calibration. This publication is intended to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty, and modeling assumptions. The concepts of consistency, convergence, and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations, and discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
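
      One standard way to bound discretization error when no exact solution is available is Richardson extrapolation from solutions on two mesh resolutions. The sketch below illustrates that common verification tool; it is not necessarily the specific method proposed in the publication:

        # Richardson extrapolation: estimate the mesh-converged value and the
        # fine-mesh discretization error from coarse/fine solutions.
        def richardson(f_coarse, f_fine, refinement_ratio, order):
            r_p = refinement_ratio ** order
            f_exact = f_fine + (f_fine - f_coarse) / (r_p - 1)
            return f_exact, abs(f_exact - f_fine)

        # Second-order scheme, mesh refined by a factor of 2 (illustrative values).
        f_exact, err = richardson(f_coarse=0.9720, f_fine=0.9925, refinement_ratio=2, order=2)
        print(f_exact, err)  # ~0.9993 extrapolated value, ~6.8e-3 fine-mesh error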

    14. Effects of Hot Streak and Phantom Cooling on Heat Transfer in a Cooled Turbine Stage Including Particulate Deposition

      SciTech Connect (OSTI)

      Bons, Jeffrey; Ameri, Ali

      2015-09-30

      The objective of this research effort was to develop a validated computational modeling capability for the characterization of the effects of hot streaks and particulate deposition on the heat load of modern gas turbines. This was accomplished with a multi-faceted approach including analytical, experimental, and computational components. A 1-year no cost extension request was approved for this effort, so the total duration was 4 years. The research effort succeeded in its ultimate objective by leveraging extensive experimental deposition studies complemented by computational modeling. Experiments were conducted with hot streaks, vane cooling, and combinations of hot streaks with vane cooling. These studies contributed to a significant body of corporate knowledge of deposition, in combination with particle rebound and deposition studies funded by other agencies, to provide suitable conditions for the development of a new model. The model includes the following physical phenomena: elastic deformation, plastic deformation, adhesion, and shear removal. It also incorporates material property sensitivity to temperature and tangential-normal velocity rebound cross-dependencies observed in experiments. The model is well-suited for incorporation in CFD simulations of complex gas turbine flows due to its algebraic (explicit) formulation. This report contains model predictions compared to coefficient of restitution data available in the open literature as well as deposition results from two different high temperature turbine deposition facilities. While the model comparisons with experiments are in many cases promising, several key aspects of particle deposition remain elusive. The simple phenomenological nature of the model allows for parametric dependencies to be evaluated in a straightforward manner. This effort also included the first-ever full turbine stage deposition model published in the open literature. The simulations included hot streaks and simulated vane cooling. The new deposition model was implemented into the CFD model as a wall boundary condition, with various particle sizes investigated in the simulation. Simulations utilizing a steady mixing plane formulation and an unsteady sliding mesh were conducted and the flow solution of each was validated against experimental data. Results from each of these simulations, including impact and capture distributions and efficiencies, were compared and potential reasons for differences discussed in detail. The inclusion of a large range of particle sizes allowed investigation of trends with particle size, such as increased radial migration and reduced sticking efficiency at the larger particle sizes. The unsteady simulation predicted lower sticking efficiencies on the rotor blades than the mixing plane simulation for the majority of particle sizes. This is postulated to be due to the preservation of the hot streak and cool vane wake through the vane-rotor interface (which are smeared out circumferentially in the mixing-plane simulation). The results reported here represent the successful implementation of a novel deposition model into validated vane-rotor flow solutions that include a non-uniform inlet temperature profile and simulated vane cooling.

    15. Computing at the leading edge: Research in the energy sciences

      SciTech Connect (OSTI)

      Mirin, A.A.; Van Dyke, P.T.

      1994-02-01

      The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both of the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

    16. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    17. [Article 1 of 7: Motivates and Includes the Consumer]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 of 7: Research on the Characteristics of a Modern Grid by the NETL Modern Grid Strategy Team Accommodates All Generation and Storage Options Last month we presented the first Principal Characteristic of a Modern Grid, "Motivates and Includes the Consumer". This month we present a second characteristic, "Accommodates All Generation and Storage Options". This characteristic will fundamentally transition today's grid from a centralized model for generation to one that also has

    18. Ionic liquids, electrolyte solutions including the ionic liquids, and energy storage devices including the ionic liquids

      DOE Patents [OSTI]

      Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W.

      2015-12-08

      An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid.

    19. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster, Oct. 21-22, 2010, Argonne TRACC. Dr. Cezary Bojanowski, Dr. Ronald F. Kulak. The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with

    20. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    1. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work, and this report, describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    2. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mira Computational Readiness Assessment: Assess your project's computational readiness for Mira. A review of the following computational readiness points in relation to scaling, porting, I/O, memory

    3. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources

    4. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      applications in the warm dense matter NDCX experiment Wangyi Liu 1 , John Barnard 2 , Alex Friedman 2 , Nathan Masters 2 , Aaron Fisher 2 , Velemir Mlaker 2 , Alice Koniges 2 , David Eder 2 1 LBNL, USA, 2 LLNL, USA This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. NERSC provided computational resources. Work

    5. CAD-centric Computation Management System for a Virtual TBM

      SciTech Connect (OSTI)

      Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

      2011-05-03

      HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor, and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics, and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface for the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase I activity.

    6. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    7. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce amounts of data several orders of magnitude larger than before. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

    8. Predicting age of ovarian failure after radiation to a field that includes the ovaries

      SciTech Connect (OSTI)

      Wallace, W. Hamish B. . E-mail: Hamish.Wallace@ed.ac.uk; Thomson, Angela B.; Saran, Frank; Kelsey, Tom W.

      2005-07-01

      Purpose: To predict the age at which ovarian failure is likely to develop after radiation to a field that includes the ovary in women treated for cancer. Methods and Materials: Modern computed tomography radiotherapy planning allows determination of the effective dose of radiation received by the ovaries. Together with our recent assessment of the radiosensitivity of the human oocyte, the effective surviving fraction of primordial oocytes can be determined and the age of ovarian failure, with 95% confidence limits, predicted for any given dose of radiotherapy. Results: The effective sterilizing dose (ESD: dose of fractionated radiotherapy [Gy] at which premature ovarian failure occurs immediately after treatment in 97.5% of patients) decreases with increasing age at treatment. ESD at birth is 20.3 Gy; at 10 years 18.4 Gy, at 20 years 16.5 Gy, and at 30 years 14.3 Gy. We have calculated 95% confidence limits for age at premature ovarian failure for estimated radiation doses to the ovary from 1 Gy to the ESD from birth to 50 years. Conclusions: We report the first model to reliably predict the age of ovarian failure after treatment with a known dose of radiotherapy. Clinical application of this model will enable physicians to counsel women on their reproductive potential following successful treatment.
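
      The paper's full model (oocyte survival with confidence limits) is not reproduced in the abstract; as a minimal convenience sketch, the ESD values quoted above can be interpolated to other treatment ages, assuming simple linear variation between the quoted points.

          import numpy as np

          # ESD values quoted above: age at treatment [years] -> dose [Gy].
          AGES = [0.0, 10.0, 20.0, 30.0]
          ESD = [20.3, 18.4, 16.5, 14.3]

          def esd_at_age(age_years):
              # Linear interpolation between the quoted points.
              return float(np.interp(age_years, AGES, ESD))

          print(esd_at_age(15.0))   # ~17.45 Gy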

    9. Locating hardware faults in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-04-13

      Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
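
      A minimal sketch of the level-partitioning step described in the claim, assuming a tree whose tiers are numbered from the root: two offset sets of non-overlapping levels of adjacent tiers are built so that every parent-child link falls inside a level of at least one set. The helper below is hypothetical and only illustrates the partitioning, not the uplink/downlink tests themselves.

          def level_sets(num_tiers, span=2):
              """Two offset sets of non-overlapping test levels for a tree
              whose tiers are numbered 0..num_tiers-1 from the root. Offsetting
              the second set by one tier puts every parent-child link inside a
              level of at least one set."""
              def levels(start):
                  return [list(range(t, min(t + span, num_tiers)))
                          for t in range(start, num_tiers, span)]
              return levels(0), levels(1)

          print(level_sets(6))
          # ([[0, 1], [2, 3], [4, 5]], [[1, 2], [3, 4], [5]])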

    10. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    11. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.
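
      The abstract does not spell out the CSES computation; as a hedged sketch of the underlying idea (stakeholder-by-stakeholder expected loss), one can combine a stakes matrix with event probabilities. All names and figures below are hypothetical.

          # Hypothetical stakes matrix: dollar loss to each stakeholder if a
          # given security-breakdown event occurs, plus assumed probabilities.
          STAKES = {
              "mission owner": {"data breach": 5.0e6, "service outage": 2.0e6},
              "operator":      {"data breach": 1.0e6, "service outage": 4.0e6},
          }
          P_EVENT = {"data breach": 0.03, "service outage": 0.10}  # per year, assumed

          def mean_failure_cost(stakeholder):
              # Expected annual loss: sum over events of stake * probability.
              return sum(loss * P_EVENT[event]
                         for event, loss in STAKES[stakeholder].items())

          for s in STAKES:
              print(f"{s}: ${mean_failure_cost(s):,.0f}/yr")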

    12. Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Wierzbicki, T.; Sahraei, E.; Li, G.; Collins, L.; Sprague, M.; Kim, G. H.; Santhangopalan, S.

      2014-06-01

      The EV Everywhere Grand Challenge aims to produce plug-in electric vehicles that are as affordable and convenient for the American family as gasoline-powered vehicles by 2022. Among the requirements set by the challenge, electric vehicles must be as safe as conventional vehicles, and EV batteries must not lead to unsafe situations under abuse conditions. NREL's project started in October 2013, based on a proposal in response to the January 2013 DOE VTO FOA, with the goal of developing computer-aided engineering tools to accelerate the development of safer lithium-ion batteries.

    13. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research.

    14. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    15. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    16. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    17. Measurements and computations of room airflow with displacement ventilation

      SciTech Connect (OSTI)

      Yuan, X.; Chen, Q.; Glicksman, L.R.; Hu, Y.; Yang, X.

      1999-07-01

      This paper presents a set of detailed experimental data of room airflow with displacement ventilation. These data were obtained from a new environmental test facility. The measurements were conducted for three typical room configurations: a small office, a large office with partitions, and a classroom. The distributions of air velocity, air velocity fluctuation, and air temperature were measured by omnidirectional hot-sphere anemometers, and contaminant concentrations were measured by tracer gas at 54 points in the rooms. Smoke was used to observe airflow. The data also include the wall surface temperature distribution, air supply parameters, and the age of air at several locations in the rooms. A computational fluid dynamics (CFD) program with the Re-Normalization Group (RNG) k-ε model was also used to predict the indoor airflow. The agreement between the computed results and measured data of air temperature and velocity is good. However, some discrepancies exist in the computed and measured concentrations and velocity fluctuation.

    18. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - models and correlations

      SciTech Connect (OSTI)

      Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.

      1998-03-01

      This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a systems transient code for application to different versions of Boiling Water Reactors (BWR) such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). This code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes modeling of the plant control and safety systems. Chapter 6 presents the modeling of the balance of plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs.
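
      The report's specific drift-flux correlations are not quoted in this abstract. For orientation, drift-flux formulations of this kind conventionally build on the Zuber-Findlay relation between the vapor velocity and the mixture volumetric flux,

      \[
      \langle u_g \rangle \;=\; C_0\,\langle j \rangle \;+\; V_{gj},
      \]

      where \(\langle j \rangle\) is the mixture volumetric flux, \(C_0\) the distribution parameter, and \(V_{gj}\) the vapor drift velocity; RAMONA-4B's particular closures for \(C_0\) and \(V_{gj}\) are given in the report itself.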

    19. Numerical computation of Pop plot

      SciTech Connect (OSTI)

      Menikoff, Ralph

      2015-03-23

      The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
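
      The report's automated procedure is not reproduced here, but a Pop plot is conventionally a straight line in log-log coordinates, so the fit amounts to a power law. The sketch below fits that form to hypothetical (pressure, run-distance) pairs; all numbers are illustrative, not PBX 9502 data.

          import numpy as np

          # Hypothetical (initial shock pressure [GPa], run distance [mm])
          # pairs with Pop-plot character; not PBX 9502 data.
          P = np.array([8.0, 10.0, 12.0, 16.0])
          x = np.array([12.0, 7.0, 4.5, 2.4])

          # Pop plots are conventionally straight in log-log coordinates:
          # log10(x*) = a + b*log10(P), with b < 0 (stronger shock, shorter run).
          b, a = np.polyfit(np.log10(P), np.log10(x), 1)

          def run_distance(pressure_gpa):
              # Distance of run to detonation predicted by the fitted line.
              return 10.0 ** (a + b * np.log10(pressure_gpa))

          print(run_distance(14.0))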

    20. Discussion: the design and analysis of the Gaussian process model

      SciTech Connect (OSTI)

      Williams, Brian J; Loeppky, Jason L

      2008-01-01

      The investigation of complex physical systems utilizing sophisticated computer models has become commonplace with the advent of modern computational facilities. In many applications, experimental data on the physical systems of interest is extremely expensive to obtain and hence is available in limited quantities. The mathematical systems implemented by the computer models often include parameters having uncertain values. This article provides an overview of statistical methodology for calibrating uncertain parameters to experimental data. This approach assumes that prior knowledge about such parameters is represented as a probability distribution, and the experimental data is used to refine our knowledge about these parameters, expressed as a posterior distribution. Uncertainty quantification for computer model predictions of the physical system are based fundamentally on this posterior distribution. Computer models are generally not perfect representations of reality for a variety of reasons, such as inadequacies in the physical modeling of some processes in the dynamic system. The statistical model includes components that identify and adjust for such discrepancies. A standard approach to statistical modeling of computer model output for unsampled inputs is introduced for the common situation where limited computer model runs are available. Extensions of the statistical methods to functional outputs are available and discussed briefly.
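
      As a minimal sketch of the surrogate ("emulator") step described here, the fragment below fits a zero-mean Gaussian process with a squared-exponential kernel to a handful of computer-model runs and predicts at unsampled inputs. It deliberately omits the calibration and discrepancy machinery of the article; the kernel hyperparameters and the stand-in model output are assumptions.

          import numpy as np

          def sq_exp_kernel(x1, x2, length=0.3, variance=1.0):
              # Squared-exponential covariance between two 1-D input sets.
              d = x1[:, None] - x2[None, :]
              return variance * np.exp(-0.5 * (d / length) ** 2)

          # A handful of computer-model runs (1-D input for clarity); the
          # sine output is a stand-in for real simulator output.
          X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
          y = np.sin(2.0 * np.pi * X)

          K = sq_exp_kernel(X, X) + 1e-8 * np.eye(len(X))  # jitter for stability
          L = np.linalg.cholesky(K)
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

          def emulate(x_new):
              # Posterior-mean prediction of the model at an unsampled input.
              k = sq_exp_kernel(np.atleast_1d(float(x_new)), X)
              return float(k @ alpha)

          print(emulate(0.6))   # approximates sin(1.2*pi)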

    1. C -parameter distribution at N 3 LL ' including power corrections

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.

      2015-05-15

      We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s^3), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.
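
      For reference, the observable itself is conventionally defined from the final-state three-momenta as

      \[
      C \;=\; \frac{3}{2}\,\frac{\sum_{i,j} |\vec p_i|\,|\vec p_j|\,\sin^2\theta_{ij}}{\bigl(\sum_i |\vec p_i|\bigr)^{2}},
      \]

      where \(\theta_{ij}\) is the angle between particles \(i\) and \(j\); \(C\) runs from 0 in the back-to-back dijet limit to 1 for spherically symmetric events, and the peak, tail, and far-tail regions mentioned above refer to ranges of this variable.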

    2. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    3. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include applications of experimental mathematics in statistics as well as statistical methods applied to computational mathematics.

    4. Insights on the binding of thioflavin derivative markers to amyloid fibril models and Aβ(1-40) fibrils from computational approaches

      SciTech Connect (OSTI)

      Alí-Torres, Jorge; Rimola, Albert; Sodupe, Mariona; Rodríguez-Rodríguez, Cristina

      2014-10-06

      The present contribution analyzes the binding of ThT and neutral ThT derivatives to a β-sheet model by means of quantum chemical calculations. In addition, we study the properties of four molecules, 2-(2-hydroxyphenyl)benzoxazole (HBX), 2-(2-hydroxyphenyl)benzothiazole (HBT), and their respective iodinated compounds, HBXI and HBTI, in binding to amyloid fibril models and Aβ(1-40) fibrils by using a combination of docking, molecular dynamics, and quantum mechanics calculations.

    5. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    6. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    7. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each; the talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
      Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open-source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
      From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high-performance cluster applications for computationally intensive problems in financial risk management.

    8. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

    9. Performing a global barrier operation in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

      2014-12-09

      Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
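
      A single-node sketch of the claimed two-stage scheme, using Python threads as stand-ins for tasks (the patent targets compute nodes in a parallel machine; everything below is illustrative): non-master tasks join only the local barrier, and the master proceeds to the global barrier only once the local barrier is full.

          import threading

          TASKS_PER_NODE = 4
          local_barrier = threading.Barrier(TASKS_PER_NODE)

          def join_global_barrier(node_id):
              # Stand-in for the network-wide barrier spanning all master tasks.
              print(f"node {node_id}: master task joined the global barrier")

          def task(node_id, rank):
              is_master = (rank == 0)        # one designated master per node
              local_barrier.wait()           # every task joins the single local barrier
              if is_master:
                  # Reached only after all tasks on this node have joined the
                  # local barrier, mirroring the ordering in the claim.
                  join_global_barrier(node_id)

          threads = [threading.Thread(target=task, args=(0, r))
                     for r in range(TASKS_PER_NODE)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()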

    10. Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles

      DOE Patents [OSTI]

      Oliveira, Joseph S. (Richland, WA); Jones-Oliveira, Janet B. (Richland, WA); Bailey, Colin G. (Wellington, NZ); Gull, Dean W. (Seattle, WA)

      2008-07-01

      One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
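
      A hedged sketch of the classical construction the embodiment builds on: grow a spanning tree over the graph; every edge left over closes exactly one fundamental cycle, recovered as the tree path between its endpoints plus the edge itself. The helper below is illustrative, not the patented matroid machinery.

          def fundamental_cycles(vertices, edges):
              """Spanning tree by union-find; each leftover edge closes one
              fundamental cycle."""
              parent = {v: v for v in vertices}

              def find(v):
                  while parent[v] != v:
                      parent[v] = parent[parent[v]]  # path halving
                      v = parent[v]
                  return v

              tree, chords = [], []
              for u, v in edges:
                  ru, rv = find(u), find(v)
                  if ru == rv:
                      chords.append((u, v))          # closes a cycle
                  else:
                      parent[ru] = rv
                      tree.append((u, v))

              adj = {v: [] for v in vertices}
              for u, v in tree:
                  adj[u].append(v)
                  adj[v].append(u)

              def tree_path(src, dst):
                  # Depth-first search restricted to spanning-tree edges.
                  stack, seen = [(src, [src])], {src}
                  while stack:
                      node, path = stack.pop()
                      if node == dst:
                          return path
                      for nxt in adj[node]:
                          if nxt not in seen:
                              seen.add(nxt)
                              stack.append((nxt, path + [nxt]))

              return [tree_path(u, v) + [u] for u, v in chords]

          print(fundamental_cycles("abcd",
                [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "a")]))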

    11. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers! Boy Scout Troop 405! What is a computer? Is this a computer? Charles Babbage: father of the computer. In the 1830s he designed mechanical calculators to reduce human error, comprising an input device, memory to store instructions and results, a processor, and an output device. Vacuum tube: Edison (1883) and Lee de Forest (1906) discovered that vacuum tubes could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean logic (George Boole, 1850s)

    12. Directory of Energy Information Administration Models 1993

      SciTech Connect (OSTI)

      Not Available

      1993-07-06

      This directory contains descriptions of each model, including the title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 35 EIA models active as of May 1, 1993. Models that run on personal computers are identified by "PC" as part of the acronym. EIA is developing new models, a National Energy Modeling System (NEMS), and is making changes to existing models to include new technologies, environmental issues, conservation, and renewables, as well as to extend the forecast horizon. Other parts of the Department are involved in this modeling effort. A fully operational model is planned which will integrate completed segments of NEMS for its first official application: preparation of EIA's Annual Energy Outlook 1994. Abstracts for the new models will be included in next year's version of this directory.

    13. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

      SciTech Connect (OSTI)

      Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

      2010-08-01

      Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

    15. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    17. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Networking Group Do you need help? For assistance please submit a CNG Help Request ticket. CNG Logo Chris Ramirez SSRL Computer and Networking Group (650) 926-2901 | email Jerry Camuso SSRL Computer and Networking Group (650) 926-2994 | email Networking Support The Networking group provides connectivity and communications services for SSRL. The services provided by the Networking Support Group include: Local Area Network support for cable and wireless connectivity. Installation and

    18. Accelerating Battery Design Using Computer-Aided Engineering Tools: Preprint

      SciTech Connect (OSTI)

      Pesaran, A.; Heon, G. H.; Smith, K.

      2011-01-01

      Computer-aided engineering (CAE) is a proven pathway, especially in the automotive industry, to improve performance by resolving the relevant physics in complex systems, shortening the product development design cycle, thus reducing cost, and providing an efficient way to evaluate parameters for robust designs. Academic models include the relevant physics details, but neglect engineering complexities. Industry models include the relevant macroscopic geometry and system conditions, but simplify the fundamental physics too much. Most of the CAE battery tools for in-house use are custom model codes and require expert users. There is a need to make these battery modeling and design tools more accessible to end users such as battery developers, pack integrators, and vehicle makers. Developing integrated and physics-based CAE battery tools can reduce the design, build, test, break, re-design, re-build, and re-test cycle and help lower costs. NREL has been involved in developing various models to predict the thermal and electrochemical performance of large-format cells and has used commercial three-dimensional finite-element analysis and computational fluid dynamics tools to study battery pack thermal issues. These NREL cell and pack design tools can be integrated to help support the automotive industry and to accelerate battery design.
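
      As a hedged illustration of the kind of balance such CAE tools resolve in far greater detail, here is a toy lumped-capacitance thermal model of a single cell in Python; every name and parameter value is invented for the example and is not NREL's model.

          # Toy cell thermal model: C dT/dt = I^2 R - h (T - T_ambient),
          # integrated with forward Euler. All values are illustrative.
          def cell_temperature(i_amp, r_ohm, h_w_per_k, t_amb_c, c_j_per_k,
                               dt_s=1.0, steps=3600):
              t = t_amb_c
              for _ in range(steps):
                  q_gen = i_amp ** 2 * r_ohm           # ohmic heating (W)
                  q_loss = h_w_per_k * (t - t_amb_c)   # convective loss (W)
                  t += dt_s * (q_gen - q_loss) / c_j_per_k
              return t

          # 20 A through a 10 mOhm cell for one hour of simulated time:
          print(round(cell_temperature(20, 0.010, 0.5, 25.0, 800.0), 1))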

    19. USING 3D COMPUTER MODELING, BOREHOLE GEOPHYSICS, AND HIGH CAPACITY PUMPS TO RESTORE PRODUCTION TO MARGINAL WELLS IN THE EAST TEXAS FIELD

      SciTech Connect (OSTI)

      R.L. Bassett

      2003-06-09

      Methods for extending the productive life of marginal wells in the East Texas Field were investigated using advanced computer imaging technology, geophysical tools, and selective perforation of existing wells. Funding was provided by the Department of Energy, TENECO Energy, and Schlumberger Wireline and Testing. Drillers' logs for more than 100 wells in proximity to the project lease were acquired and converted to digital format using a numerical scheme, and the data were used to create a three-dimensional geological image of the project site. Using the descriptive drillers' logs in numerical format yielded useful cross sections identifying the Woodbine-Austin Chalk contact and the continuity of sand zones between wells. The geological data provided information about reservoir continuity but not about the amount of remaining oil; that was obtained using modern logs run selectively. Schlumberger logged the wells through 2 3/8 inch tubing with a new slimhole Reservoir Saturation Tool (RST), which measures the oil and water content of the existing porosity using neutron scattering and a gamma ray spectrometer (GST). The tool provided direct measurements of elemental content, yielding interpretations of porosity, lithology, and oil and water content, and confirming that significant oil saturation still exists, up to 50% in the upper Woodbine sand. Well testing then began, and by the end of the project new oil was being produced from zones abandoned or bypassed more than 25 years ago.
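
      As a sketch of the digitization step described above (converting descriptive drillers' logs to a numerical scheme), the following Python fragment maps lithology keywords to numeric codes; the keywords, codes, and depths are invented for illustration and are not the project's actual scheme.

          # Hypothetical digitization of descriptive drillers' log entries
          # into a numeric lithology column for 3-D modeling.
          LITHOLOGY_CODES = {"shale": 1, "sand": 2, "chalk": 3, "lime": 4}

          def digitize_log(entries):
              """entries: list of (top_ft, bottom_ft, description) tuples."""
              column = []
              for top, bottom, desc in entries:
                  code = next((c for kw, c in LITHOLOGY_CODES.items()
                               if kw in desc.lower()), 0)  # 0 = unclassified
                  column.append((top, bottom, code))
              return column

          log = [(3550, 3600, "gray SHALE"), (3600, 3640, "oil SAND, Woodbine")]
          print(digitize_log(log))  # [(3550, 3600, 1), (3600, 3640, 2)]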

    20. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry - Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies that may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World - Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization is discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o
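
      The talks mention Monte Carlo techniques borrowed from physics; a classic financial instance is pricing a European call option by simulating geometric Brownian motion, sketched below in Python with illustrative parameters. Because the simulated paths are independent, such workloads parallelize naturally onto the clusters and grids discussed here.

          # Monte Carlo pricing of a European call under geometric Brownian
          # motion; parameter values are made up for the example.
          import math, random

          def mc_european_call(s0, strike, rate, vol, maturity, n_paths=100_000):
              payoff_sum = 0.0
              for _ in range(n_paths):
                  z = random.gauss(0.0, 1.0)
                  st = s0 * math.exp((rate - 0.5 * vol ** 2) * maturity
                                     + vol * math.sqrt(maturity) * z)
                  payoff_sum += max(st - strike, 0.0)
              # Discount the average payoff back to today.
              return math.exp(-rate * maturity) * payoff_sum / n_paths

          print(round(mc_european_call(100, 105, 0.05, 0.2, 1.0), 2))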

    1. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2013-07-23

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
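
      A minimal Python sketch of the buffering idea in this abstract, with ordinary objects standing in for a compute node's shared memory and all names hypothetical: the sender stores a message in the receiver's pre-established buffer without checking whether the receiver has initialized, and the receiver retrieves its buffer pointer when it starts.

          class SharedRegion:
              """Stands in for the region of shared memory allocated on first
              process init: one message buffer per anticipated process."""
              def __init__(self, num_processes):
                  self.buffers = {rank: [] for rank in range(num_processes)}

          class Process:
              def __init__(self, rank, region):
                  self.rank = rank
                  self.region = region
                  # "Retrieving a pointer" to its own buffer on initialization.
                  self.inbox = region.buffers[rank]

              def send(self, dest_rank, message):
                  # Store directly in the destination's buffer, without
                  # checking whether the destination has initialized yet.
                  self.region.buffers[dest_rank].append(message)

              def receive(self):
                  return self.inbox.pop(0) if self.inbox else None

          region = SharedRegion(num_processes=2)
          p0 = Process(0, region)
          p0.send(1, "hello before peer init")  # process 1 not started yet
          p1 = Process(1, region)               # init: picks up its buffer
          assert p1.receive() == "hello before peer init"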

    2. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2014-01-07

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

    3. COMPUTATIONAL RESOURCES FOR BIOFUEL FEEDSTOCK SPECIES

      SciTech Connect (OSTI)

      Buell, Carol Robin; Childs, Kevin L

      2013-05-07

      While current production of ethanol as a biofuel relies on starch and sugar inputs, it is anticipated that sustainable production of ethanol for biofuel use will utilize lignocellulosic feedstocks. Candidate plant species to be used for lignocellulosic ethanol production include a large number of species within the Grass, Pine and Birch plant families. For these biofuel feedstock species, there are variable amounts of genome sequence resources available, ranging from complete genome sequences (e.g. sorghum, poplar) to transcriptome data sets (e.g. switchgrass, pine). These data sets are not only dispersed in location but also disparate in content. It will be essential to leverage and improve these genomic data sets for the improvement of biofuel feedstock production. The objectives of this project were to provide computational tools and resources for data-mining genome sequence/annotation and large-scale functional genomic datasets available for biofuel feedstock species. We have created a Bioenergy Feedstock Genomics Resource that provides a web-based portal or “clearing house” for genomic data for plant species relevant to biofuel feedstock production. Sequence data from a total of 54 plant species are included in the Bioenergy Feedstock Genomics Resource, including model plant species that permit leveraging of knowledge across taxa to biofuel feedstock species. We have generated additional computational analyses of these data, including uniform annotation, to facilitate genomic approaches to improved biofuel feedstock production. These data have been centralized in the publicly available Bioenergy Feedstock Genomics Resource (http://bfgr.plantbiology.msu.edu/).

    4. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust aftertreatment systems. The engine combustion modeling is focused on developing Large Eddy Simulation (LES). LES is being used with closely coupled key target experiments to reveal new understanding of the fundamental processes involved in engine

    5. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Turbulence models typically involve coarse-graining and/or time averaging. Though adequate for modeling mean transport, this approach does not address turbulence-microphysics interactions that are important in combustion processes. Subgrid models are developed to represent these interactions. The CRF has developed a fundamentally different representation of these interactions that does not involve distinct coarse-grained and subgrid

    6. computational-structural-mechanics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course | date | location):
      Training Course: HyperMesh and HyperView | April 12-14, 2011 | Argonne TRACC, Argonne, IL
      Introductory Course: Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster | October 21-22, 2010 | Argonne TRACC, West Chicago, IL
      Modeling and Simulation with LS-DYNA®: Insights into Modeling with a Goal of Providing Credible Predictive Simulations | February 11-12, 2010 | Argonne TRACC, West Chicago, IL
      Introductory Course: Using LS-OPT® on the TRACC

    7. Indirect detection of gravitino dark matter including its three-body decays

      SciTech Connect (OSTI)

      Choi, Ki-Young; Restrepo, Diego; Yaguna, Carlos E.; Zapata, Oscar E-mail: restrepo@udea.edu.co E-mail: pfozapata@eia.edu.co

      2010-10-01

      It was recently pointed out that in supersymmetric scenarios with gravitino dark matter and bilinear R-parity violation, gravitinos with masses below M{sub W} typically decay with a sizable branching ratio into the 3-body final states W*l and Z*ν. In this paper we study the indirect detection signatures of gravitino dark matter including such final states. First, we obtain the gamma ray spectrum from gravitino decays, which features a monochromatic contribution from the decay into γν and a continuum contribution from the three-body decays. After studying its dependence on supersymmetric parameters, we compute the expected gamma ray fluxes and derive new constraints, from recent FERMI data, on the R-parity breaking parameter and on the gravitino lifetime. Indirect detection via antimatter searches, a new possibility brought about by the three-body final states, is also analyzed. For models compatible with the gamma ray observations, the positron signal is found to be negligible whereas the antiproton one can be significant.

    8. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    9. Nijmegen soft-core potential including two-meson exchange

      SciTech Connect (OSTI)

      Stoks, V.G.J.; Rijken, T.A.

      1995-05-10

      We report on the progress of the construction of the extended soft-core (ESC) Nijmegen potential. Next to the standard one-boson-exchange parts, the model includes the pion-meson-exchange potentials due to the parallel and crossed-box diagrams, as well as the one-pair and two-pair diagrams, vertices for which can be identified with similar interactions appearing in chiral-symmetric Lagrangians. Although the ESC potential is still under construction, it already gives an excellent description of all NN scattering data below 350 MeV with χ²/datum = 1.3. © 1995 American Institute of Physics.
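
      For orientation only: in the static, spin-independent limit, each one-boson-exchange part reduces to the textbook Yukawa form (standard meson-exchange theory, not the ESC model's full momentum-space potential):

          V_{\mathrm{OBE}}(r) = \frac{g^2}{4\pi}\,\frac{e^{-m_B r}}{r}

      where g is the meson-nucleon coupling constant and m_B is the mass of the exchanged boson.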

    10. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

      2015-07-21

      Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
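
      A compact Python sketch of the aggregation step this abstract describes, as the job leader node might perform it; the ranks, exit codes, and reduction rule are illustrative assumptions, not the patented implementation.

          # Combine per-node exit statuses into one job-level status:
          # success only if every compute node in the subset succeeded.
          def aggregate_exit_statuses(statuses):
              failures = {rank: code for rank, code in statuses.items()
                          if code != 0}
              if not failures:
                  return {"status": 0, "failed_ranks": []}
              return {"status": 1, "failed_ranks": sorted(failures)}

          # rank -> exit code, as received by the job leader compute node
          subset = {0: 0, 1: 0, 2: 137, 3: 0}
          print(aggregate_exit_statuses(subset))
          # -> {'status': 1, 'failed_ranks': [2]}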

    11. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

      SciTech Connect (OSTI)

      Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

      2004-01-28

      This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

    12. Progress report No. 56, October 1, 1979-September 30, 1980 [Courant Mathematics and Computing Lab., New York Univ.]

      SciTech Connect (OSTI)

      1980-10-01

      Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

    13. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      warm dense matter experiments using the 3D ALE-AMR code and the move toward exascale computing. Alice Koniges (1), Wangyi Liu (1), John Barnard (2), Alex Friedman (2), Grant Logan (1), David Eder (2), Aaron Fisher (2), Nathan Masters (2), and Andrea Bertozzi (3). (1) Lawrence Berkeley National Laboratory; (2) Lawrence Livermore National Laboratory; (3) University of California, Los Angeles. Abstract. The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial

    14. Broadcasting collective operation contributions throughout a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad (Rochester, MN)

      2012-02-21

      Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
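
      A toy Python simulation of the two-phase pattern in this abstract: contributions are first shared among processors within a node, then each processor in a serial transmission sequence sends over its node's designated link. For brevity, one view per node stands in for the per-processor views; the data layout is an illustrative assumption.

          def broadcast_contributions(nodes):
              # Phase 1: intra-node -- every processor learns the
              # contributions of the other processors on its own node.
              views = [set(node) for node in nodes]
              # Phase 2: serial processor transmission sequence -- processor
              # p of every node sends its contribution over that node's
              # designated link, so every view converges to the global set.
              for p in range(len(nodes[0])):
                  for sending_node in nodes:
                      for view in views:
                          view.add(sending_node[p])
              return [sorted(view) for view in views]

          # nodes[n][p] is the contribution of processor p on node n.
          print(broadcast_contributions([[1, 2], [3, 4]]))
          # -> [[1, 2, 3, 4], [1, 2, 3, 4]]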

    15. Graph modeling systems and methods

      DOE Patents [OSTI]

      Neergaard, Mike

      2015-10-13

      An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
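
      A sketch of the evaluation using Python's networkx library (an assumption; the patent names no library). A terminating vertex models a network node, a non-terminating vertex models a shared, non-nodal vulnerability such as one trench carrying two cables, and a vertex is a critical point of failure exactly when it is an articulation point of the graph.

          import networkx as nx

          G = nx.Graph()
          G.add_nodes_from(["A", "B", "C", "D"], terminating=True)
          G.add_node("trench1", terminating=False)  # non-nodal vulnerability
          G.add_edges_from([("A", "trench1"), ("trench1", "B"),
                            ("B", "C"), ("C", "D"), ("D", "B")])

          # A vertex is a critical point of failure if removing it
          # disconnects the graph -- exactly an articulation point.
          critical = set(nx.articulation_points(G))
          for v, data in G.nodes(data=True):
              kind = "node" if data["terminating"] else "path vulnerability"
              print(f"{v:8s} {kind:20s} critical={v in critical}")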

    16. Multi Layer Contaminant Migration Model

      Energy Science and Technology Software Center (OSTI)

      1999-07-28

      This computer software augments and enhances certain calculations included in the previously copyrighted Vadose Zone Contaminant Migration Model. The computational method used in this model recognizes the heterogeneous nature of the soils and attempts to account for the variability by using four separate layers to simulate the flow of water through the vadose zone. The pore-water velocity calculated by the code will therefore differ from that of the previous model, because it accounts for a wider variety of soil properties encountered in the vadose zone. This model also performs a screening step not present in the previous model: the higher of two different types of Soil Screening Levels (SSLs) is compared to the soil concentration of each contaminant, and any contaminant whose concentration exceeds the higher of the two SSLs is listed. This is consistent with USEPA's Soil Screening Guidance.
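
      A short Python sketch of the screening step as just described: a contaminant is listed when its soil concentration exceeds the higher of the two Soil Screening Levels. All names and values are invented for illustration; they are not USEPA guidance values.

          def screen_contaminants(concentrations, ssl_a, ssl_b):
              """All arguments are dicts keyed by contaminant, in mg/kg."""
              listed = []
              for name, conc in concentrations.items():
                  threshold = max(ssl_a[name], ssl_b[name])
                  if conc > threshold:   # exceeds the higher SSL -> listed
                      listed.append((name, conc, threshold))
              return listed

          soil = {"benzene": 1.5, "toluene": 3.2}
          ssl_type_a = {"benzene": 0.03, "toluene": 12.0}  # hypothetical
          ssl_type_b = {"benzene": 1.2, "toluene": 16.0}   # hypothetical
          print(screen_contaminants(soil, ssl_type_a, ssl_type_b))
          # -> [('benzene', 1.5, 1.2)]; toluene passes the screen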

    17. Pacing a data transfer operation between compute nodes on a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A. (Rochester, MN)

      2011-09-13

      Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
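
      A single-threaded Python sketch of the pacing loop in this abstract; a small class stands in for the target node's DMA engine, and all names are hypothetical. The origin transfers one chunk, issues a pacing request, and only proceeds once the response arrives.

          class FakeTargetDMA:
              """Stands in for the target's DMA engine: a pacing request
              (remote get) is answered once the prior chunk has landed."""
              def __init__(self):
                  self.received = []
              def absorb(self, chunk):
                  self.received.append(chunk)
              def pacing_response(self):
                  return len(self.received)   # ack: chunks landed so far

          def paced_transfer(message, chunk_size, target):
              chunks = [message[i:i + chunk_size]
                        for i in range(0, len(message), chunk_size)]
              for n, chunk in enumerate(chunks, start=1):
                  target.absorb(chunk)        # transfer one chunk
                  # Do not move to the next chunk until the pacing
                  # response for this one has been received.
                  assert target.pacing_response() == n
              return target.received

          target = FakeTargetDMA()
          print(paced_transfer(b"0123456789abcdef", 4, target))
          # -> [b'0123', b'4567', b'89ab', b'cdef']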

    18. Foam process models.

      SciTech Connect (OSTI)

      Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

      2008-09-01

      In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
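
      To make the driving mechanism concrete: if the homogenized density relaxes from the initial resin density toward a final foamed density (an assumed exponential form for illustration, not the report's empirical fit), mass conservation alone gives the front height in a straight-walled mold, as in this Python sketch.

          import math

          def foam_front_height(mass_kg, area_m2, t_s,
                                rho0=1000.0, rho_final=50.0, tau_s=30.0):
              """Front height h(t) = m / (rho(t) * A) for expanding foam."""
              # Assumed density relaxation; the report uses an empirical
              # time- and temperature-dependent model instead.
              rho = rho_final + (rho0 - rho_final) * math.exp(-t_s / tau_s)
              return mass_kg / (rho * area_m2)

          for t in (0, 30, 60, 120):
              print(t, round(foam_front_height(0.5, 0.01, t), 4))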

    19. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    20. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...