National Library of Energy BETA

Sample records for determination computational modeling

  1. Theory, Modeling and Computation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

  2. Computational procedures for determining parameters in Ramberg-Osgood

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    elastoplastic model based on modulus and damping versus strain (Technical Report) | SciTech Connect.

  3. Determining Memory Use | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Determining Memory Use: Determining the amount of memory available during the execution of the program requires the use of...

  4. Determining Allocation Requirements | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Determining Allocation Requirements: Estimating CPU-Hours for ALCF Blue Gene/Q Systems. When estimating CPU-hours for the ALCF Blue Gene/Q systems, it is important to take into consideration the unique aspects of the Blue Gene...

  5. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The KIVA model has been instrumental in helping researchers and manufacturers understand...

  6. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells: Integrated biological and...

  7. 2014-02-21 Issuance: Proposed Determination of Computer Servers...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    a Covered Consumer Product; Withdrawal. This document is a pre-publication...

  8. Cupola Furnace Computer Process Model

    SciTech Connect (OSTI)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook," Chapter 27, American Foundry Society, Des Plaines, IL (1999).
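
    The diagnose-and-adjust loop this abstract describes is simple to illustrate. Below is a minimal sketch assuming a toy stand-in model with invented inputs (blast rate, coke ratio) and gradient-like diagnostics; it is not the actual AFS/DOE cupola simulation:

```python
# Minimal sketch of the iterative "run model -> read diagnostics -> adjust
# inputs" loop described above. The model, its inputs, and its diagnostics
# are hypothetical stand-ins, not the cupola code itself.

def cupola_model(blast_rate, coke_ratio):
    """Toy furnace simulation: returns (efficiency, diagnostics)."""
    efficiency = 1.0 - (blast_rate - 3.0) ** 2 - (coke_ratio - 0.12) ** 2
    diagnostics = {"blast": -2.0 * (blast_rate - 3.0),
                   "coke": -2.0 * (coke_ratio - 0.12)}
    return efficiency, diagnostics

blast_rate, coke_ratio, step = 2.0, 0.20, 0.4
for iteration in range(5):  # "a few iterations" reach near-optimum
    efficiency, diagnostics = cupola_model(blast_rate, coke_ratio)
    print(f"iteration {iteration}: efficiency = {efficiency:.3f}")
    # Use the diagnostics as "clues" to nudge each input toward higher efficiency.
    blast_rate += step * diagnostics["blast"]
    coke_ratio += step * diagnostics["coke"]
```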

  9. Bayesian approaches for combining computational model output...

    Office of Scientific and Technical Information (OSTI)

    for combining computational model output and physical observations. Authors: Higdon, David M [1]; Lawrence, Earl [1]; Heitmann, Katrin [2]; Habib, Salman [2]...

  10. Computable General Equilibrium Models for Sustainability Impact...

    Open Energy Info (EERE)

    Publications, Software/modeling tools. User Interface: Other. Website: iatools.jrc.ec.europa.eudocsecolecon2006.pdf

  11. Section 23: Models and Computer Codes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application-2014 for the Waste Isolation Pilot Plant: Models and Computer Codes (40 CFR 194.23). United States Department of Energy, Waste Isolation Pilot Plant, Carlsbad Field...

  12. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  13. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells: Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined...

  14. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells: Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined experimental and...

  15. Low Mach Number Models in Computational Astrophysics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Low Mach Number Models in Computational Astrophysics - Ann Almgren, Berkeley Lab. February 4, 2014. Downloads: Almgren-nug2014.pdf (Adobe Acrobat PDF file).

  16. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The KIVA model has been instrumental in helping researchers and manufacturers understand combustion processes, accelerate engine development and improve engine design and efficiency. September 25, 2012. KIVA simulation of an experimental engine with DOHC quasi-symmetric pent-roof combustion chamber and 4 valves.

  17. CUPOLA FURNACE COMPUTER PROCESS MODEL

    Office of Scientific and Technical Information (OSTI)

    ... p 809 (1995) 25. Clark D., Moore K., Stanek V., Katz S.: Neural network ... E. D., Clark D. E., Moore K. L.: AFS cupola model verification - initial investigations. ...

  18. Determining position inside building via laser rangefinder and handheld computer

    DOE Patents [OSTI]

    Ramsey, Jr. James L. (Albuquerque, NM); Finley, Patrick (Albuquerque, NM); Melton, Brad (Albuquerque, NM)

    2010-01-12

    An apparatus, computer software, and a method of determining position inside a building comprising selecting on a PDA at least two walls of a room in a digitized map of a building or a portion of a building, pointing and firing a laser rangefinder at corresponding physical walls, transmitting collected range information to the PDA, and computing on the PDA a position of the laser rangefinder within the room.
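
    As a rough illustration of the geometry involved (not the patent's implementation): with two non-parallel walls known from the digitized map and a perpendicular range measurement to each, the rangefinder position follows from a small linear solve. The function and wall parameters below are invented for the example.

```python
# Illustrative sketch only: recover a 2D position from perpendicular ranges
# to two non-parallel walls taken from a digitized floor map.
import numpy as np

def locate(walls, ranges):
    """walls: [(unit_normal, offset)] describing wall lines {p : n . p = c},
    with the room interior on the n . p <= c side; ranges: measured
    perpendicular distance to each wall. Solves n_i . p = c_i - d_i."""
    normals = np.array([n for n, _ in walls], dtype=float)
    rhs = np.array([c - d for (_, c), d in zip(walls, ranges)])
    return np.linalg.solve(normals, rhs)

# Example room corner at the origin: the west wall is the line x = 0 and the
# south wall is the line y = 0 (interior normals point toward the walls).
west = ((-1.0, 0.0), 0.0)
south = ((0.0, -1.0), 0.0)
print(locate([west, south], [3.0, 2.0]))  # -> [3. 2.]
```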

  19. Computer Model Buildings Contaminated with Radioactive Material

    Energy Science and Technology Software Center (OSTI)

    1998-05-19

    The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

  20. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

    SciTech Connect (OSTI)

    Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios

    2015-06-09

    This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.

  1. Preliminary Phase Field Computational Model Development

    SciTech Connect (OSTI)

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress), and planned measurements of major hysteresis loops and first order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with single non-magnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately used to simulate magnetic Barkhausen noise signals.
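
    For reference, the Landau-Lifshitz-Gilbert equation these phase-field models solve has the standard textbook form (taken from the general micromagnetics literature, not quoted from the report):

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}
```

    where M is the magnetization, H_eff the effective field (exchange, anisotropy, demagnetizing, and applied contributions), gamma the gyromagnetic ratio, alpha the Gilbert damping parameter, and M_s the saturation magnetization.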

  2. Low Mach Number Models in Computational Astrophysics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In memoriam: Michael Welcome, 1957-2014. Low Mach Number Models in Computational Astrophysics - Ann Almgren, Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory. NUG 2014: NERSC@40, February 4, 2014. Collaborators: John Bell, Chris Malone, Andy Nonaka, Stan Woosley, Michael Zingale. Introduction: We often associate astrophysics with explosive phenomena: novae, supernovae, gamma-ray bursts, X-ray bursts. Type Ia Supernovae: Largest...

  3. Determining collective barrier operation skew in a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
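
    The measurement scheme in this abstract maps naturally onto a message-passing sketch. The mpi4py version below is illustrative only; the patent targets specific parallel hardware, and the delay constant here is arbitrary.

```python
# Illustrative mpi4py sketch of the skew measurement described above: each
# rank takes one turn entering the barrier late, records its own barrier
# completion time, and rank 0 computes skew = max - min.
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
DELAY = 0.01  # artificial delay for the selected (delayed) node, in seconds

my_completion_time = None
for delayed in range(size):        # every node is selected once
    comm.Barrier()                 # align ranks before the measured barrier
    if rank == delayed:
        time.sleep(DELAY)          # the delayed node enters the barrier late
    start = time.perf_counter()
    comm.Barrier()                 # the measured collective barrier
    if rank == delayed:
        my_completion_time = time.perf_counter() - start

times = comm.gather(my_completion_time, root=0)
if rank == 0:
    print("barrier operation skew:", max(times) - min(times), "seconds")
```

    Run with, for example, `mpiexec -n 4 python skew.py`.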

  4. Determining collective barrier operation skew in a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A.

    2015-12-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.

  5. MaRIE theory, modeling and computation roadmap executive summary...

    Office of Scientific and Technical Information (OSTI)

    Conference: MaRIE theory, modeling and computation roadmap executive summary...

  6. Computational Fluid Dynamics Modeling of Diesel Engine Combustion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Computational Fluid Dynamics Modeling of Diesel Engine Combustion and Emissions. 2005 Diesel Engine...

  7. Computational flow modeling of a simplified integrated tractor...

    Office of Scientific and Technical Information (OSTI)

    Computational flow modeling of a simplified integrated tractor-trailer geometry.

  8. Computer modeling of the global warming effect

    SciTech Connect (OSTI)

    Washington, W.M.

    1993-12-31

    The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

  9. MHK Reference Model: Relevance to Computer Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Diana Bull, Sandia National Laboratories, July 9th, 2012. SAND Number: 2012-5508P. MHK Reference Model: Relevance to Computer Simulation. Reference Model Partners: Oregon State University/NNMREC; University of Washington; St. Anthony Falls Laboratory-UMinn; Florida Atlantic University/SNMREC; Cardinal Engineering. WEC Design: Operational Waves Profile; Design of WEC--Performance; Structural Design of WEC; PTO Design; Survival Waves; Structural Design of WEC--Survivability; Brake Design; Anchor and Mooring...

  10. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering (Presentation)

    SciTech Connect (OSTI)

    Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.

    2014-06-01

    This presentation discusses the significant enhancement of computational efficiency in nonlinear multiscale battery model for computer aided engineering in current research at NREL.

  11. Wild Fire Computer Model Helps Firefighters

    ScienceCinema (OSTI)

    Canfield, Jesse

    2014-06-02

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  12. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect (OSTI)

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, an exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computation meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  13. Review of the synergies between computational modeling and experimenta...

    Office of Scientific and Technical Information (OSTI)

    Accepted Manuscript: Review of the synergies between computational modeling and ... November 16, 2016.

  14. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect (OSTI)

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and continue to play a critical role in the newly-funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, including shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly-indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  15. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Pritchett, 2004) Exploration Activity...

  16. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Pritchett, 2004) Exploration...

  17. Modeling-Computer Simulations At Geysers Area (Goff & Decker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Geysers Area (Goff & Decker, 1983) Exploration Activity Details...

  18. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wisian & Blackwell, 2004) Exploration...

  19. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1980) Exploration Activity Details...

  20. Modeling-Computer Simulations (Lewicki & Oldenburg, 2004) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Lewicki & Oldenburg, 2004) Exploration Activity Details Location...

  1. Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell, 2004) Exploration Activity...

  2. Modeling-Computer Simulations (Combs, Et Al., 1999) | Open Energy...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Combs, Et Al., 1999) Exploration Activity Details Location Unspecified...

  3. Modeling-Computer Simulations At Yellowstone Region (Laney, 2005...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Yellowstone Region (Laney, 2005) Exploration Activity Details Location...

  4. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1979) Exploration Activity Details...

  5. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1977) Exploration Activity Details...

  6. Modeling-Computer Simulations (Ozkocak, 1985) | Open Energy Informatio...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ozkocak, 1985) Exploration Activity Details Location Unspecified...

  7. Modeling-Computer Simulations At White Mountains Area (Goff ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At White Mountains Area (Goff & Decker, 1983) Exploration Activity...

  8. Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell, 2004) Exploration Activity...

  9. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Wilt & Haar, 1986)...

  10. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Kennedy & Soest, 2006) Exploration...

  11. Modeling-Computer Simulations (Ranalli & Rybach, 2005) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ranalli & Rybach, 2005) Exploration Activity Details Location...

  12. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1983) Exploration Activity Details...

  13. Determination of Diffusion Profiles in Altered Wellbore Cement Using X-ray Computed Tomography Methods

    SciTech Connect (OSTI)

    Mason, Harris E.; Walsh, Stuart D. C.; DuFrane, Wyatt L.; Carroll, Susan A.

    2014-06-17

    The development of accurate, predictive models for use in determining wellbore integrity requires detailed information about the chemical and mechanical changes occurring in hardened Portland cements. X-ray computed tomography (XRCT) provides a method that can nondestructively probe these changes in three dimensions. Here, we describe a method for extracting subvoxel mineralogical and chemical information from synchrotron XRCT images by combining advanced image segmentation with geochemical models of cement alteration. The method relies on determining effective linear activity coefficients (ELAC) for the white light source to generate calibration curves that relate the image grayscales to material composition. The resulting data set supports the modeling of cement alteration by CO2-rich brine with discrete increases in calcium concentration at reaction boundaries. The results of these XRCT analyses can be used to further improve coupled geochemical and mechanical models of cement alteration in the wellbore environment.
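
    The calibration-curve step can be pictured with a toy mapping from image grayscale to composition. The reference points below are invented for illustration; they are not the ELAC values from the paper.

```python
# Toy grayscale-to-composition calibration curve; reference points are
# invented stand-ins, not the paper's ELAC-derived values.
import numpy as np

reference_grayscale = np.array([1200.0, 1850.0, 2400.0])  # arbitrary units
reference_calcium = np.array([0.05, 0.18, 0.33])          # Ca mass fraction

def grayscale_to_calcium(voxels):
    """Piecewise-linear calibration from CT grayscale to Ca content."""
    return np.interp(voxels, reference_grayscale, reference_calcium)

voxel_values = np.array([1300.0, 1900.0, 2100.0, 2350.0])
print(grayscale_to_calcium(voxel_values))
```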

  14. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Analysis Resources | Argonne National Laboratory. A private industry success story. PDF: cat_cummins_computing_success_story_dec_

  15. MaRIE theory, modeling and computation roadmap executive summary

    Office of Scientific and Technical Information (OSTI)

    (Conference) | SciTech Connect. The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained...

  16. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect (OSTI)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  17. Review of computational thermal-hydraulic modeling

    SciTech Connect (OSTI)

    Keefer, R.H.; Keeton, L.W.

    1995-12-31

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  18. Modeling of Geothermal Reservoirs: Fundamental Processes, Computer...

    Open Energy Info (EERE)

    of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications. OpenEI Reference Library. Journal Article:...

  19. Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz...

    Open Energy Info (EERE)

    Fish Lake Valley Area (Deymonaz, Et Al., 2008). Exploration Activity: Modeling-Computer Simulations At Fish Lake Valley...

  20. Martin Karplus and Computer Modeling for Chemical Systems

    Office of Scientific and Technical Information (OSTI)

    Additional information about Martin Karplus, computer modeling, and chemical systems is available in electronic documents and on the Web. Documents: Comparison of 3D...

  1. Modeling-Computer Simulations At Nevada Test And Training Range...

    Open Energy Info (EERE)

    Nevada Test And Training Range Area (Sabin, Et Al., 2004). Exploration Activity: Modeling-Computer Simulations At Nevada...

  2. New partnership uses advanced computer science modeling to address...

    National Nuclear Security Administration (NNSA)

    partnership uses advanced computer science modeling to address climate change | National Nuclear Security Administration...

  3. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wannamaker, Et Al., 2006) Exploration...

  4. Modeling-Computer Simulations At Obsidian Cliff Area (Hulen,...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Obsidian Cliff Area (Hulen, Et Al., 2003) Exploration Activity Details...

  5. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Laney, 2005) Exploration...

  6. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Roberts, Et Al., 1995)...

  7. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Pribnow, Et Al., 2003)...

  8. Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al., 2010) Exploration Activity Details...

  9. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Pritchett, 2004) Exploration...

  10. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Brown & DuTeaux, 1997) Exploration...

  11. Modeling-Computer Simulations At Coso Geothermal Area (1980)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1980) Exploration Activity Details Location Coso...

  12. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Newman, Et Al., 2006) Exploration...

  13. Scientists use world's fastest computer to model materials under...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientists use world's fastest computer to model materials under extreme conditions: Materials scientists are for the first time attempting to...

  14. Modeling-Computer Simulations At The Needles Area (Bell & Ramelli...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At The Needles Area (Bell & Ramelli, 2009) Exploration Activity Details...

  15. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Goff & Decker, 1983) Exploration...

  16. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Farrar, Et Al., 2003) Exploration...

  17. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Biasi, Et Al., 2009) Exploration...

  18. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Roberts, Et Al.,...

  19. Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett, 2004) Exploration Activity Details...

  20. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Tempel, Et Al., 2011) Exploration...

  1. Modeling-Computer Simulations At Nw Basin & Range Region (Biasi...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Biasi, Et Al., 2009) Exploration Activity...

  2. Modeling-Computer Simulations At Coso Geothermal Area (2000)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (2000) Exploration Activity Details Location Coso...

  3. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Biasi, Et Al., 2009) Exploration...

  4. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Wilt & Haar, 1986)...

  5. Computer Modeling of Chemical and Geochemical Processes in High...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling of chemical and geochemical processes in high ionic strength solutions is a unique capability within Sandia's Defense Waste Management Programs located in...

  6. Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker, Et Al., 2010) Exploration Activity...

  7. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Biasi, Et Al., 2009) Exploration...

  8. Modeling-Computer Simulations At Coso Geothermal Area (1999)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1999) Exploration Activity Details Location Coso...

  9. Computer-Aided Construction of Chemical Kinetic Models

    SciTech Connect (OSTI)

    Green, William H.

    2014-12-31

    The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g., rate coefficients, thermochemistry), one never has sufficient data to uniquely determine them all experimentally. Instead one must work in predictive mode, using theoretical rather than experimental values for many of the numbers in the model, and, as appropriate, refining the most sensitive numbers through experiments. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical, a challenging goal that requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model-reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.
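
    As a toy illustration of the "numerically solve the kinetic model" half of that workflow (the project's automatically constructed models contain hundreds or thousands of species; this one has three, with invented rate coefficients):

```python
# Toy kinetic model A -> B -> C integrated with SciPy; the rate coefficients
# are invented, standing in for theory-derived values.
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 0.5  # hypothetical rate coefficients (1/s)

def rates(t, y):
    a, b, c = y
    return [-k1 * a,          # A consumed
            k1 * a - k2 * b,  # B produced from A, consumed to form C
            k2 * b]           # C produced

sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0])
print("concentrations at t = 10 s:", sol.y[:, -1])
```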

  10. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

    Office of Science (SC) Website

    and Evolving MPI for Exascale | U.S. DOE Office of Science (SC), Advanced Scientific Computing Research (ASCR).

  11. Performance Modeling for 3D Visualization in a Heterogeneous Computing

    Office of Scientific and Technical Information (OSTI)

    Environment (Technical Report) | SciTech Connect. The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such...

  12. Ambient temperature modelling with soft computing techniques

    SciTech Connect (OSTI)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
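
    A minimal sketch of that hybrid, assuming a toy fitness function in place of the ANN training error; the population size, number of BP-seeded individuals, and genetic operators are illustrative, not the authors' settings:

```python
# Sketch of a GA over network-weight vectors in which a few individuals are
# initialised by backpropagation; every constant here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS, POP, GENERATIONS = 20, 30, 50

def fitness(w):
    # Stand-in for the negative training error of an ANN with weights w.
    return -np.sum((w - 0.5) ** 2)

def bp_trained_individual():
    # Stand-in for a weight vector produced by a short backprop run.
    return 0.5 + 0.05 * rng.standard_normal(N_WEIGHTS)

# Seed a few individuals with BP; fill the rest of the population randomly.
population = [bp_trained_individual() for _ in range(3)]
population += [rng.uniform(-1, 1, N_WEIGHTS) for _ in range(POP - 3)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]           # truncation selection
    children = []
    while len(parents) + len(children) < POP:
        i, j = rng.choice(len(parents), size=2, replace=False)
        cut = int(rng.integers(1, N_WEIGHTS))  # one-point crossover
        child = np.concatenate([parents[i][:cut], parents[j][cut:]])
        child += 0.01 * rng.standard_normal(N_WEIGHTS)  # Gaussian mutation
        children.append(child)
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best fitness:", fitness(population[0]))
```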

  13. Computational Fluid Dynamics Modeling of Diesel Engine Combustion and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Emissions | Department of Energy. 2005 Diesel Engine Emissions Reduction (DEER) Conference Presentations and Posters: 2005_deer_reitz.pdf. Related: Experiments and Modeling of Two-Stage Combustion in Low-Emissions Diesel Engines; Comparison of Conventional Diesel and Reactivity Controlled Compression...

  14. Modeling-Computer Simulations | Open Energy Information

    Open Energy Info (EERE)

    the risk of inaccurate predictions.[1] Potential Pitfalls: Uncertainties in initial reservoir conditions and other model inputs can cause inaccuracies in simulations, which...

  15. Improvements in fast-response flood modeling: desktop parallel computing and domain tracking

    SciTech Connect (OSTI)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    It is becoming increasingly important to have the ability to accurately forecast flooding, as flooding accounts for the most losses due to natural disasters in the world and the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated in these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g., dikes/levees, roads, walls, etc.). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments. Because two-dimensional models based on the shallow water equations have significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper has shown that two-dimensional flood modeling computation time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. In addition, this paper has shown that when desktop parallel computing is coupled with a domain tracking algorithm, significant computation time can be eliminated when computations are completed only on inundated cells. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, engineering design tool, or planning tool. Perhaps of even greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al. 2000).
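
    The domain-tracking idea is easy to sketch. The per-cell update below is a diffusion-like stand-in for a shallow-water solver, and the paper's implementation is multithreaded Java rather than Python; the point is only that work is restricted to wet cells and their immediate neighbors.

```python
# Toy domain tracking: update only cells that are wet or adjacent to a wet
# cell, instead of sweeping the whole grid every time step.
import numpy as np

def neighbors(i, j, shape):
    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if 0 <= a < shape[0] and 0 <= b < shape[1]:
            yield a, b

def step(depth, active):
    new_depth = depth.copy()
    for i, j in active:  # work only on the tracked subdomain
        flux = sum(depth[a, b] - depth[i, j]
                   for a, b in neighbors(i, j, depth.shape))
        new_depth[i, j] += 0.1 * flux  # stand-in for the real flow update
    # Grow the tracked domain: wet cells plus their immediate neighbors.
    new_active = set()
    for i, j in zip(*np.nonzero(new_depth > 1e-9)):
        new_active.add((i, j))
        new_active.update(neighbors(i, j, depth.shape))
    return new_depth, new_active

depth = np.zeros((200, 200))
depth[100, 100] = 1.0              # point source of floodwater
active = {(100, 100)}
for _ in range(20):
    depth, active = step(depth, active)
print(f"tracking {len(active)} of {depth.size} cells")
```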

  16. Scientists model brain structure to help computers recognize...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Do you see what I see? Scientists model brain structure to help computers recognize ... Introspectively, we know that the human brain solves this problem very well. We only have ...

  17. Computational Modeling for the American Chemical Society | GE...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Modeling for the American Chemical Society...

  18. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
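
    For context, the classical Butler-Volmer expression that the patent modifies has the standard electrochemical form (general literature, not quoted from the patent):

```latex
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

    where i is the current density, i_0 the exchange current density, alpha_a and alpha_c the anodic and cathodic transfer coefficients, F Faraday's constant, eta the overpotential, R the gas constant, and T the temperature. Per the abstract, the patented modification makes the exchange current density depend on pulse time and electrode surface availability through added sigmoid-based terms.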

  19. Computational model of miniature pulsating heat pipes.

    SciTech Connect (OSTI)

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  20. Hierarchical calibration of computer models (Conference) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)


  1. Towards a Computational Model of a Methane Producing Archaeum (Journal

    Office of Scientific and Technical Information (OSTI)

    Article) | SciTech Connect. Journal Article: Towards a Computational Model of a Methane Producing Archaeum. Authors: Peterson, Joseph R.; Labhsetwar, Piyush; Ellermeier, Jeremy...

  2. Bayesian approaches for combining computational model output and physical

    Office of Scientific and Technical Information (OSTI)

    observations (Conference) | SciTech Connect. Authors: Higdon, David M [1]; Lawrence, Earl [1]; Heitmann, Katrin [2]; Habib, Salman [2]. Affiliations: [1] Los Alamos National Laboratory; [2] ANL. Publication Date: 2011-07-25. OSTI Identifier: 1084581. Report Number(s):

  3. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1 (Technical Report)

    Office of Scientific and Technical Information (OSTI)

  5. HIV virus spread and evolution studied through computer modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This approach distinguishes between susceptible and infected individuals to capture the full infection history, including contact tracing data for infected individuals. November 19, 2013. [Image: scanning electron micrograph of HIV-1 budding (in green) from cultured lymphocytes, colored to highlight important features.]

  6. Computer modeling reveals how surprisingly potent hepatitis C drug works

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A study reveals how daclatasvir targets one of the virus's proteins and causes the fastest viral decline ever seen with anti-HCV drugs - within 12 hours of treatment. February 19, 2013.

  7. Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. PDF: ace012_flowers_2012_o.pdf

  8. Use Computational Model to Design and Optimize Welding Conditions to Suppress Helium Cracking during Welding

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Today, welding is widely used for repair, maintenance, and upgrade of nuclear reactor components. As a critical technology to extend the service life of nuclear power plants beyond 60 years, weld technology must be…

  9. Review of the synergies between computational modeling and experimental characterization of materials across length scales (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    DOE PAGES Accepted Manuscript (publicly available November 16, 2016). With the increasing interplay between experimental and…

  10. Towards a Computational Model of a Methane Producing Archaeum (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    Authors: Peterson, Joseph R.; Labhsetwar, Piyush (ORCID: 0000-0001-5933-3609); Ellermeier, Jeremy R.; Kohler, Petra R. A.; Jain, Ankur; …

  11. 2014-02-21 Issuance: Proposed Determination of Computer and Battery Backup Systems as a Covered Consumer Product

    Broader source: Energy.gov (indexed) [DOE]

  12. 2014-02-21 Issuance: Proposed Determination of Computer Servers as a Covered Consumer Product; Withdrawal

    Broader source: Energy.gov [DOE]

    This document is a pre-publication Federal Register notice withdrawing the previously proposed determination that computer servers qualify as a covered product, as issued by the Deputy Assistant Secretary for Energy Efficiency on February 21, 2014.

  13. 2014-02-21 Issuance: Proposed Determination of Computer and Battery Backup Systems as a Covered Consumer Product

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This document is a pre-publication Federal Register notice of proposed determination regarding computer systems, as issued by the Deputy Assistant Secretary for Energy Efficiency on February 21, 2014. Though it is not intended or expected, should any discrepancy…

  14. 2014-03-26 Issuance: Proposed Determination of Computer and Battery Backup Systems as a Covered Consumer Product; Extension of Public Comment Period

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This document is a pre-publication Federal Register notice extending the public comment period for the…

  15. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell, and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
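
    The pulse-bracketing idea can be sketched numerically: given overpotentials measured during constant-current pulses at currents bracketing the exchange current density, a symmetric (alpha = 0.5) Butler-Volmer relation is fit for i0. The data values and the symmetric simplification are assumptions for illustration, not the patent's mechanistic model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    F_CONST, R_GAS, T = 96485.0, 8.314, 298.15
    f = F_CONST / (R_GAS * T)

    def bv_overpotential(i, i0):
        # Symmetric Butler-Volmer relation inverted to give the
        # overpotential eta as a function of current density i.
        return (2.0 / f) * np.arcsinh(i / (2.0 * i0))

    # Hypothetical pulse data: three current densities bracketing i0
    # (A/m^2) and the overpotential measured at the end of each pulse (V).
    i_pulse = np.array([-2.0, 1.0, 3.0])
    eta_meas = np.array([-0.027, 0.014, 0.038])

    (i0_fit,), _ = curve_fit(bv_overpotential, i_pulse, eta_meas, p0=[1.0])
    print(f"fitted exchange current density: {i0_fit:.2f} A/m^2")
    ```

    Repeating the fit at a later aging period gives a second exchange current density, and the trend between the two is what such a model extrapolates along the aging timeline.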

  16. New Computer Model Pinpoints Prime Materials for Carbon Capture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    July 17, 2012. NERSC contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. UC Berkeley contact: Robert Sanders, rsanders@berkeley.edu. [Image: zeolite350.jpg, one of the 50 best zeolite structures for capturing carbon dioxide. Zeolite is a porous solid made of silicon dioxide, or quartz; in the model, the red balls are oxygen, the tan balls are silicon, and the blue-green area is where…]

  17. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect (OSTI)

    Weinan E

    2012-03-29

    The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear-scaling and sub-linear-scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear-scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave functions and developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of density functional theory. Our key contribution is to add a localization step into the algorithm; this addition makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach: through pole expansion, we have developed an optimal-scaling FOE algorithm.
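
    The localization property underpinning such linear-scaling methods can be seen in a toy calculation: for a one-dimensional tight-binding Hamiltonian at finite temperature, the off-diagonal entries of the density matrix decay rapidly away from the diagonal. The model and parameters below are illustrative only; this is not the authors' algorithm.

    ```python
    import numpy as np

    def fermi_dirac(e, mu, beta):
        return 1.0 / (1.0 + np.exp(beta * (e - mu)))

    # 1D nearest-neighbor tight-binding Hamiltonian, a toy stand-in
    # for the Kohn-Sham Hamiltonian discussed above.
    n, t_hop = 200, 1.0
    H = -t_hop * (np.eye(n, k=1) + np.eye(n, k=-1))

    evals, evecs = np.linalg.eigh(H)
    P = evecs @ np.diag(fermi_dirac(evals, mu=0.0, beta=10.0)) @ evecs.T

    # Off-diagonal decay of the density matrix: the rapid fall-off is
    # what makes truncation-based linear-scaling algorithms possible.
    row = n // 2
    print(np.array2string(np.abs(P[row, row:row + 10]), precision=4))
    ```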

  18. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
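
    As a schematic of the Bayesian-inference ingredient, the sketch below calibrates a single parameter of a toy exponential-decay model against noisy synthetic observations using random-walk Metropolis sampling. The model form, prior bounds, and noise level are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(theta, x):
        # Toy computer model with one uncertain parameter theta
        return np.exp(-theta * x)

    x_obs = np.linspace(0.0, 2.0, 10)
    sigma = 0.02                          # assumed measurement noise
    y_obs = model(0.7, x_obs) + rng.normal(0.0, sigma, x_obs.size)

    def log_post(theta):
        if not 0.0 < theta < 5.0:         # uniform prior bounds
            return -np.inf
        resid = y_obs - model(theta, x_obs)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Random-walk Metropolis over the single model parameter
    theta, chain = 1.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.05)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)

    post = np.array(chain[1000:])         # discard burn-in
    print(f"posterior: {post.mean():.3f} +/- {post.std():.3f}")
    ```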

  19. District-heating strategy model: computer programmer's manual

    SciTech Connect (OSTI)

    Kuzanek, J.F.

    1982-05-01

    The US Department of Housing and Urban Development (HUD) and the US Department of Energy (DOE) cosponsor a program aimed at increasing the number of district heating and cooling (DHC) systems. Such systems can reduce the amount and costs of fuels used to heat and cool buildings in a district. Twenty-eight communities have agreed to aid HUD in a national feasibility assessment of DHC systems. The HUD/DOE program entails technical assistance by Argonne National Laboratory and Oak Ridge National Laboratory. The assistance includes a computer program, called the district heating strategy model (DHSM), that performs preliminary calculations to analyze potential DHC systems. This report describes the general capabilities of the DHSM, provides historical background on its development, and explains the computer installation and operation of the model, including the data file structures and the options. Sample problems illustrate the structure of the various input data files and the interactive computer-output listings. The report is written primarily for computer programmers responsible for installing the model on their computer systems, entering data, running the model, and implementing local modifications to the code.

  20. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect (OSTI)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  1. Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010, Washington, D.C. PDF: ace012_aceves_2010_o.pdf

  2. LANL researchers use computer modeling to study HIV

    National Nuclear Security Administration (NNSA)

  3. New partnership uses advanced computer science modeling to address climate change

    National Nuclear Security Administration (NNSA)

  4. Scientists use world's fastest computer to model materials under extreme conditions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Materials scientists are for the first time attempting to create atomic-scale models that describe how voids are created, grow, and merge. October 30, 2009.

  5. Computer Modeling of Saltstone Landfills by Intera Environmental Consultants

    SciTech Connect (OSTI)

    Albenesius, E.L.

    2001-08-09

    This report summarizes the computer modeling studies and how the results of these studies were used to estimate contaminant releases to the groundwater. These modeling studies were used to improve saltstone landfill designs and are the basis for the current reference design. With the reference landfill design, EPA Drinking Water Standards can be met for all chemicals and radionuclides contained in Savannah River Plant waste salts.

  6. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  7. A New Perspective for the Calibration of Computational Predictor Models.

    SciTech Connect (OSTI)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
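
    One simple, not necessarily minimal-spread, way to realize the interval predictor idea: fit a least-squares center line and take the half-width as the largest residual, so the interval-valued prediction contains every observation. The linear model and synthetic data below are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical observations from dissimilar validation experiments
    x = np.linspace(0.0, 1.0, 30)
    y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)

    # Least-squares center line y = a*x + b
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Half-width = largest residual, so [center - w, center + w]
    # encloses all observations (a crude IPM construction).
    w = np.abs(y - A @ coef).max()

    x_new = 0.5
    center = coef[0] * x_new + coef[1]
    print(f"interval prediction at x=0.5: [{center - w:.3f}, {center + w:.3f}]")
    ```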

  8. Fast, narrow-band computer model for radiation calculations

    SciTech Connect (OSTI)

    Yan, Z.; Holmstedt, G.

    1997-01-01

    A fast, narrow-band computer model, FASTNB, which predicts the radiation intensity in a general nonisothermal and nonhomogeneous combustion environment, has been developed. The spectral absorption coefficients of the combustion products, including carbon dioxide, water vapor, and soot, are calculated based on the narrow-band model. FASTNB provides an accurate calculation at reasonably high speed. Compared with Grosshandler's narrow-band model, RADCAL, which has been verified quite extensively against experimental measurements, FASTNB is more than 20 times faster and gives almost exactly the same results.

  9. The origins of computer weather prediction and climate modeling

    SciTech Connect (OSTI)

    Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  10. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the leadership-class computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  11. Martin Karplus and Computer Modeling for Chemical Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Resources with additional information: Karplus Equation. Martin Karplus, the Theodore William Richards Professor of Chemistry Emeritus at Harvard, is one of three winners of the 2013 Nobel Prize in chemistry. The 83-year-old Vienna-born theoretical chemist, who is also affiliated with the Université de Strasbourg, Strasbourg, France, is a 1951 graduate of Harvard College and earned his…

  12. Onset of Chaos in a Model of Quantum Computation (G. Berman, et al.)

    Office of Scientific and Technical Information (OSTI)

    Subject categories: 71 Classical and Quantum Mechanics, General Physics; 99 General and Miscellaneous/Mathematics, Computing, and…

  13. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  14. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect (OSTI)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
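
    A heavily simplified stand-in for this kind of content-analysis model is an n-gram text classifier. The toy corpus, labels, and scikit-learn pipeline below are invented for illustration; they show the train-then-score-unseen-text pattern, not the study's actual features or models.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical mini-corpus labeled 1 (violent intent) or 0 (not)
    docs = [
        "our grievances demand immediate armed action against the oppressors",
        "we call on members to petition the council and organize a march",
        "the enemy understands only force and we shall deliver it",
        "join our peaceful campaign to reform the local election rules",
    ]
    labels = [1, 0, 1, 0]

    vec = TfidfVectorizer(ngram_range=(1, 2))   # unigram + bigram features
    X = vec.fit_transform(docs)
    clf = LogisticRegression().fit(X, labels)

    # Apply the trained model to an unseen instance of linguistic behavior
    unseen = ["prepare the brothers for the coming armed struggle"]
    print(clf.predict_proba(vec.transform(unseen)))
    ```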

  15. A computer program to determine the specific power of prismatic-core reactors

    SciTech Connect (OSTI)

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  16. Determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation

    DOE Patents [OSTI]

    Blocksome, Michael A. (Rochester, MN)

    2011-12-20

    Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
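
    The counter logic reads roughly as follows in a single-process Python simulation. The class names, the exit value of zero, and the decrement convention are illustrative; the patent's mechanism operates at the level of DMA hardware.

    ```python
    class BarrierNode:
        def __init__(self, rank):
            self.rank = rank
            self.peers = []
            self.counter = 0
            self.exit_value = 0

        def configure(self, nodes):
            # On entering the barrier, the counter is configured in
            # dependence upon the number of participating nodes.
            self.peers = [n for n in nodes if n is not self]
            self.counter = len(nodes)

        def enter_barrier(self):
            self.counter -= 1                  # our own arrival
            for p in self.peers:               # broadcast control packets
                p.receive_control_packet(self.rank)

        def receive_control_packet(self, src_rank):
            self.counter -= 1                  # one decrement per packet

        def ready_to_exit(self):
            return self.counter == self.exit_value

    nodes = [BarrierNode(r) for r in range(4)]
    for n in nodes:
        n.configure(nodes)
    for n in nodes:
        n.enter_barrier()
    print([n.ready_to_exit() for n in nodes])   # [True, True, True, True]
    ```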

  17. Model independent determination of the {sigma} pole

    SciTech Connect (OSTI)

    Leutwyler, H.

    2008-08-31

    The first part of this report reviews recent developments at the interface between lattice work on QCD with light dynamical quarks, effective field theory, and low-energy precision experiments. Then I discuss how dispersion theory can be used to analyze the low-energy structure of the ππ scattering amplitude in a model-independent manner. This leads to an exact formula for the mass and width of the lowest few resonances, in terms of observable quantities. As an application, I consider the pole position of the σ, paying particular attention to error propagation in the numerical analysis. The report is based on work done in collaboration with Irinel Caprini and Gilberto Colangelo.

  18. Modeling the Fracture of Ice Sheets on Parallel Computers

    SciTech Connect (OSTI)

    Waisman, Haim; Tuminaro, Ray

    2013-10-10

    The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, developing novel numerical tools to enable the modeling of the physics, and collaborating with ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurate modeling of this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer significant advancement to the field and close a large gap in knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters, and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.

  19. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Office of Scientific and Technical Information (OSTI)

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale…

  20. Computational fluid dynamic modeling of fluidized-bed polymerization reactors

    SciTech Connect (OSTI)

    Rokkam, Ram

    2012-11-02

    Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas- and liquid-phase reactors. The gas-phase processes are more advantageous and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot-spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas-phase polymerization fluidized-bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on kinetic theory of granular flow closures. The electrostatic model is developed based on a fixed, size-dependent charge for each type of particle (catalyst, polymer, polymer fines) phase. The combined CFD model is first verified using simple test cases, validated with experiments, and then applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For the scale-up of the fluidized-bed reactor, filtered models were developed and implemented on the pilot-scale reactor.

  1. Computational model for simulation small testing launcher, technical solution

    SciTech Connect (OSTI)

    Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian

    2014-12-10

    The purpose of this paper is to present some aspects of the computational model and technical solutions for a multistage suborbital launcher for testing (SLT), used to test spatial equipment and scientific measurements. The computational model consists of a numerical simulation of SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The analysed results are the flight parameters and ballistic performance. The discussion focuses on the technical possibility of realizing a small multistage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Whereas classical suborbital sounding rockets are unguided, use solid-fuel motors for propulsion, and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing the creation of a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. The project itself can therefore be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project has two major objectives: first, a short-term objective, which consists in obtaining a suborbital launching system able to go into service in a predictable period of time; and second, a long-term objective, which consists in developing and testing unconventional subsystems to be integrated later into a satellite launcher as part of the European space program. The technical content of the project must therefore be carried beyond the range of existing suborbital vehicle programs, towards the current technological necessities of the space field, especially the European one.

  2. computers

    National Nuclear Security Administration (NNSA)

    Retired computers used for cybersecurity research at Sandia National Laboratories, California.

  3. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  4. Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers

    SciTech Connect (OSTI)

    J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan

    2011-06-21

    Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) application to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al2O3 or Cr2O3. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of continuous Al2O3 scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed; among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best-quality coating with a minimum number of shallow defects, and the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al2O3 scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential for providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas, and corrosion attack of the base metal in the spalled areas, was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to variation in coating defects between samples. It is suspected that cauliflower-type defects in the coating were responsible for the spallation in isolated areas. Thus, a defect-free, good-quality coating is key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.

  5. DETERMINATION OF HLW GLASS MELT RATE USING X-RAY COMPUTED TOMOGRAPHY

    SciTech Connect (OSTI)

    Choi, A.; Miller, D.; Immel, D.

    2011-10-06

    The purpose of the high-level waste (HLW) glass melt rate study is two-fold: (1) to gain a better understanding of the impact of feed chemistry on melt rate through bench-scale testing, and (2) to develop a predictive tool for melt rate in support of the on-going frit development efforts for the Defense Waste Processing Facility (DWPF). In particular, the focus is on predicting relative melt rates, not the absolute melt rates, of various HLW glass formulations solely based on feed chemistry, i.e., the chemistry of both waste and glass-forming frit for DWPF. Critical to the successful melt rate modeling is the accurate determination of the melting rates of various HLW glass formulations. The baseline procedure being used at the Savannah River National Laboratory (SRNL) is to; (1) heat a 4 inch-diameter stainless steel beaker containing a mixture of dried sludge and frit in a furnace for a preset period of time, (2) section the cooled beaker along its diameter, and (3) measure the average glass height across the sectioned face using a ruler. As illustrated in Figure 1-1, the glass height is measured for each of the 16 horizontal segments up to the red lines where relatively large-sized bubbles begin to appear. The linear melt rate (LMR) is determined as the average of all 16 glass height readings divided by the time during which the sample was kept in the furnace. This 'visual' method has proved useful in identifying melting accelerants such as alkalis and sulfate and further ranking the relative melt rates of candidate frits for a given sludge batch. However, one of the inherent technical difficulties of this method is to determine the glass height in the presence of numerous gas bubbles of varying sizes, which is prevalent especially for the higher-waste-loading glasses. That is, how the red lines are drawn in Figure 1-1 can be subjective and, therefore, may influence the resulting melt rates significantly. For example, if the red lines are drawn too low, a significant amount of glassy material interspersed among the gas bubbles will be excluded, thus underestimating the melt rate. Likewise, if they are drawn too high, many large voids will be counted as glass, thus overestimating the melt rate. As will be shown later in this report, there is also no guarantee that a given distribution of glass and gas bubbles along a particular sectioned plane will always be representative of the entire sample volume. Poor reproducibility seen in some LMR data may be related to these difficulties of the visual method. In addition, further improvement of the existing melt rate model requires that the overall impact of feed chemistry on melt rate be reflected on measured data at a greater quantitative resolution on a more consistent basis than the visual method can provide. An alternate method being pursued is X-ray computed tomography (CT). It involves X-ray scanning of glass samples, performing CT on the 2-D X-ray images to build 3-D volumetric data, and adaptive segmentation analysis of CT results to not only identify but quantify the distinct regions within each sample based on material density and morphologies. The main advantage of this new method is that it can determine the relative local density of the material remaining in the beaker after the heat treatment regardless of its morphological conditions by selectively excluding all the voids greater than a given volumetric pixel (voxel) size, thus eliminating much of the subjectivity involved in the visual method. 
As a result, the melt rate data obtained from the CT scan will give quantitative descriptions not only of the fully-melted glass but also of partially-melted and unmelted feed materials. Therefore, the CT data are presumed to be more reflective of the actual melt rate trends in continuously-fed melters than the visual data. In order to test the applicability of X-ray CT scanning to the HLW glass melt rate study, several new series of HLW simulant/frit mixtures were melted in the Melt Rate Furnace (MRF) and the contents of each cooled but un-sectioned beaker were CT scanned and analyzed.
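
    For reference, the visual method's arithmetic is simply the mean of the 16 segment heights divided by the time in the furnace. The readings below are made up for illustration.

    ```python
    import numpy as np

    def linear_melt_rate(heights_mm, time_min):
        # Visual-method LMR: average glass height across the sectioned
        # face divided by the heat-treatment time.
        return np.mean(heights_mm) / time_min

    # Hypothetical readings for the 16 horizontal segments (mm) after
    # a 60-minute heat treatment.
    heights = np.array([22, 24, 23, 25, 26, 24, 23, 22,
                        21, 23, 25, 26, 24, 23, 22, 21], dtype=float)
    print(f"LMR = {linear_melt_rate(heights, 60.0):.3f} mm/min")
    ```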

  6. Computational fluid dynamics modeling of coal gasification in a pressurized spout-fluid bed

    SciTech Connect (OSTI)

    Zhongyi Deng; Rui Xiao; Baosheng Jin; He Huang; Laihong Shen; Qilei Song; Qianjun Li

    2008-05-15

    Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, has been extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas phase reaction submodels. The rates of heterogeneous reactions are determined by combining the Arrhenius rate and the diffusion rate. The homogeneous reactions of the gas phase can be treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
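
    The series combination of chemical (Arrhenius) and film-diffusion rates mentioned above is commonly written as 1/k_eff = 1/k_chem + 1/k_diff; the sketch below uses placeholder constants, not the paper's kinetic parameters.

    ```python
    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    def combined_char_rate(T, p_o2, A=300.0, E=1.3e5, k_diff=0.06):
        # Effective heterogeneous rate from chemical and diffusion
        # resistances in series (all constants are placeholders).
        k_chem = A * np.exp(-E / (R_GAS * T))
        k_eff = 1.0 / (1.0 / k_chem + 1.0 / k_diff)
        return k_eff * p_o2   # rate proportional to oxidant pressure

    print(combined_char_rate(T=1200.0, p_o2=0.2e5))
    ```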

  7. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall, our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
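
    The cache-blocking idea itself is compact. A minimal Python rendering of one 5-point Jacobi sweep traversed in tiles is shown below; the paper's kernels were tuned C implementations, and the block size here is arbitrary.

    ```python
    import numpy as np

    def jacobi_blocked(src, dst, block=64):
        # One sweep of a 2-D 5-point Jacobi stencil, traversing the
        # interior in block x block tiles so each tile's working set
        # stays cache-resident across the sweep.
        n, m = src.shape
        for ii in range(1, n - 1, block):
            for jj in range(1, m - 1, block):
                for i in range(ii, min(ii + block, n - 1)):
                    for j in range(jj, min(jj + block, m - 1)):
                        dst[i, j] = 0.25 * (src[i - 1, j] + src[i + 1, j] +
                                            src[i, j - 1] + src[i, j + 1])

    a = np.random.rand(256, 256)
    b = np.zeros_like(a)
    jacobi_blocked(a, b)
    ```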

  8. Computational modeling of drug-resistant bacteria. Final report

    SciTech Connect (OSTI)

    MacDougall, Preston

    2015-03-12

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  9. The use of computed radiography plates to determine light and radiation field coincidence

    SciTech Connect (OSTI)

    Kerns, James R.; Anand, Aman

    2013-11-15

    Purpose: Photo-stimulable phosphor computed radiography (CR) has characteristics that allow the output to be manipulated by both radiation and optical light. The authors have developed a method that uses these characteristics to carry out radiation field and light field coincidence quality assurance on linear accelerators.Methods: CR detectors from Kodak were used outside their cassettes to measure both radiation and light field edges from a Varian linear accelerator. The CR detector was first exposed to a radiation field and then to a slightly smaller light field. The light impinged on the detector's latent image, removing to an extent the portion exposed to the light field. The detector was then digitally scanned. A MATLAB-based algorithm was developed to automatically analyze the images and determine the edges of the light and radiation fields, the vector between the field centers, and the crosshair center. Radiographic film was also used as a control to confirm the radiation field size.Results: Analysis showed a high degree of repeatability with the proposed method. Results between the proposed method and radiographic film showed excellent agreement of the radiation field. The effect of varying monitor units and light exposure time was tested and found to be very small. Radiation and light field sizes were determined with an uncertainty of less than 1 mm, and light and crosshair centers were determined within 0.1 mm.Conclusions: A new method was developed to digitally determine the radiation and light field size using CR photo-stimulable phosphor plates. The method is quick and reproducible, allowing for the streamlined and robust assessment of light and radiation field coincidence, with no observer interpretation needed.
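
    The core of such an analysis is locating field edges as 50%-of-maximum crossings of an intensity profile. The numpy sketch below, with a synthetic profile and simple linear interpolation, illustrates the idea; it is not the authors' MATLAB algorithm.

    ```python
    import numpy as np

    def field_edges(profile, positions_mm):
        # Locate 50%-of-maximum crossings with linear interpolation
        # between neighboring samples.
        half = 0.5 * profile.max()
        above = profile >= half
        idx = np.where(np.diff(above.astype(int)) != 0)[0]
        edges = []
        for i in idx:
            x0, x1 = positions_mm[i], positions_mm[i + 1]
            y0, y1 = profile[i], profile[i + 1]
            edges.append(x0 + (half - y0) * (x1 - x0) / (y1 - y0))
        return edges

    # Synthetic flat-topped field profile with edges near +/- 10 mm
    x = np.linspace(-20, 20, 401)
    prof = 1 / (1 + np.exp(-(x + 10) * 3)) - 1 / (1 + np.exp(-(x - 10) * 3))
    print(field_edges(prof, x))
    ```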

  10. Computational Modeling and Assessment Of Nanocoatings for Ultra Supercritical Boilers

    SciTech Connect (OSTI)

    David W. Gandy; John P. Shingledecker

    2011-04-11

    Forced outages and boiler unavailability in conventional coal-fired fossil power plants are most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall-thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burner systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low-alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce or eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project, jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096), focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron- and nickel-base) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% Ni is required in Fe-based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long-term thermal exposure test results showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. In order to enhance the durability of these coatings, it is necessary to develop a diffusion barrier interlayer coating such as TiN and/or AlN. The third task, 'Process Advanced MCrAl Nanocoating Systems', focused on processing of advanced nanocrystalline coating systems and development of diffusion barrier interlayer coatings. Among the diffusion interlayer coatings evaluated, the TiN interlayer coating was found to be the optimum one. This report describes the research conducted under the Task 3 workscope.

  11. Modeling-Computer Simulations At U.S. West Region (Sabin, Et Al., 2004)

    Open Energy Info (EERE)

  12. Modeling-Computer Simulations At Cove Fort Area (Toksoz, Et Al, 2010)

    Open Energy Info (EERE)

  13. Modeling-Computer Simulations At U.S. West Region (Williams & Deangelo, 2008)

    Open Energy Info (EERE)

  14. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: About Bassi; Memory on Bassi; Large Page Memory; System Configuration; Large Page… (Office of Advanced Scientific Computing Research, Department of Energy Office of Science, contract DE-AC02-05CH11231.)

  15. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  16. Electrical utilities model for determining electrical distribution capacity

    SciTech Connect (OSTI)

    Fritz, R. L.

    1997-09-03

    In its simplest form, this model was intended to obtain meaningful data on the current state of the Site's electrical transmission and distribution assets and turn this vast collection of data into useful information. The resulting product is an Electrical Utilities Model for Determining Electrical Distribution Capacity which provides: the current state of the electrical transmission and distribution systems; critical Hanford Site needs based on outyear planning documents; and a decision factor model. This model will enable Electrical Utilities management to improve forecasting requirements for service levels, budget, schedule, scope, and staffing, and recommend the best path forward to satisfy customer demands at the minimum risk and least cost to the government. A dynamic document, the model will be updated annually to reflect changes in Hanford Site activities.

  17. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    SciTech Connect (OSTI)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  18. Compensator models for fluence field modulated computed tomography

    SciTech Connect (OSTI)

    Bartolac, Steven; Jaffray, David; Radiation Medicine Program, Princess Margaret Hospital Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9

    2013-12-15

    Purpose: Fluence field modulated computed tomography (FFMCT) presents a novel approach for acquiring CT images, whereby a patient model guides dynamically changing fluence patterns in an attempt to achieve task-based, user-prescribed, regional variations in image quality, while also controlling dose to the patient. This work aims to compare the relative effectiveness of FFMCT applied to different thoracic imaging tasks (routine diagnostic CT, lung cancer screening, and cardiac CT) when the modulator is subject to limiting constraints, such as might be present in realistic implementations. Methods: An image quality plan was defined for a simulated anthropomorphic chest slice, including regions of high and low image quality, for each of the thoracic imaging tasks. Modulated fluence patterns were generated using a simulated annealing optimization script, which attempts to achieve the image quality plan under a global dosimetric constraint. Optimization was repeated under different types of modulation constraints (e.g., fixed or gantry angle dependent patterns, continuous or comprised of discrete apertures) with the most limiting case being a fixed conventional bowtie filter. For each thoracic imaging task, an image quality map (IQM_sd) representing the regionally varying standard deviation is predicted for each modulation method and compared to the prescribed image quality plan as well as against results from uniform fluence fields. Relative integral dose measures were also compared. Results: Each IQM_sd resulting from FFMCT showed improved agreement with planned objectives compared to those from uniform fluence fields for all cases. Dynamically changing modulation patterns yielded better uniformity, improved image quality, and lower dose compared to fixed filter patterns with optimized tube current. For the latter fixed filter cases, the optimal choice of tube current modulation was found to depend heavily on the task. Average integral dose reduction compared to a uniform fluence field ranged from 10% using a bowtie filter to 40% or greater using an idealized modulator. Conclusions: The results support that FFMCT may achieve regionally varying image quality distributions in good agreement with user-prescribed values, while limiting dose. The imposition of constraints inhibits dose reduction capacity and agreement with image quality plans but still yields significant improvement over what is afforded by conventional dose minimization techniques. These results suggest that FFMCT can be implemented effectively even when the modulator has limited modulation capabilities.
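
    The simulated annealing step can be sketched generically. The toy objective, aperture weights, and cooling schedule below are illustrative placeholders, not the authors' optimization script:

        import math, random

        def image_quality_cost(weights, target=1.0, dose_penalty=0.1):
            """Toy objective: deviation of the mean 'fluence' from an image-quality
            target, plus a penalty proportional to total dose (sum of weights)."""
            mean_w = sum(weights) / len(weights)
            return (mean_w - target)**2 + dose_penalty * sum(weights)

        def anneal(n_apertures=16, steps=5000, t0=1.0, cooling=0.999):
            random.seed(0)
            w = [random.random() for _ in range(n_apertures)]  # initial aperture weights
            cost, t = image_quality_cost(w), t0
            for _ in range(steps):
                i = random.randrange(n_apertures)
                trial = w[:]
                trial[i] = min(1.0, max(0.0, trial[i] + random.uniform(-0.1, 0.1)))
                delta = image_quality_cost(trial) - cost
                # Accept improvements always; accept worse moves with Boltzmann probability.
                if delta < 0 or random.random() < math.exp(-delta / t):
                    w, cost = trial, cost + delta
                t *= cooling                                    # geometric cooling schedule
            return w, cost

        weights, final_cost = anneal()
        print("final cost:", round(final_cost, 4))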

  19. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

    SciTech Connect (OSTI)

    Ermer, J. J.; Mosher, J. C.; Baillet, S.; Leahy, R. M.

    2001-01-01

    Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace-based inverse methods like MUSIC [6], the total number of forward model evaluations can often approach an order of 10^3 or 10^4. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form: F = GQ + N (1) where the observed forward field F (M-sensors x N-time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3xP-dipoles x N-time samples) and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models [7] (or fast approximations described in [1], [7]) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, the use of a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres results in added complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the scalp, outer skull, and inner skull surfaces). Since accurate in vivo determination of internal conductivities is not currently possible, the head is typically assumed to consist of a set of contiguous isotropic regions, each with constant conductivity.
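
    Because Eq. (1) is linear in the dipole moments, a least-squares estimate of Q for a fixed gain matrix G is a one-line solve. A minimal numpy sketch with a random stand-in for G (a real G would come from the spherical or realistic head model described above):

        import numpy as np

        rng = np.random.default_rng(1)
        M, P, N = 32, 2, 100                      # sensors, dipoles, time samples

        G = rng.standard_normal((M, 3 * P))       # stand-in forward (gain) matrix
        Q_true = rng.standard_normal((3 * P, N))  # true dipole moment time series
        F = G @ Q_true + 0.05 * rng.standard_normal((M, N))  # F = GQ + N

        # Least-squares inverse for fixed, known dipole locations (i.e., fixed G).
        Q_hat, *_ = np.linalg.lstsq(G, F, rcond=None)
        print("relative error:", np.linalg.norm(Q_hat - Q_true) / np.linalg.norm(Q_true))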

  20. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    SciTech Connect (OSTI)

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scaleup issues, study operational parameters, and predict mixing performance at full scale.

    1. Experimental Determination and Thermodynamic Modeling of Electrical Conductivity of SRS Waste Tank Supernate

      SciTech Connect (OSTI)

      Pike, J.; Reboul, S.

      2015-06-01

      SRS High Level Waste Tank Farm personnel rely on conductivity probes for detection of incipient overflow conditions in waste tanks. Minimal information is available concerning the sensitivity that must be achieved such that liquid detection is assured. Overly sensitive electronics result in numerous nuisance alarms for these safety-related instruments. In order to determine the minimum sensitivity required of the probe, Tank Farm Engineering personnel need adequate conductivity data to improve the existing designs. Few or no measurements of liquid waste conductivity exist; however, the liquid phase of the waste consists of inorganic electrolytes for which the conductivity may be calculated. Savannah River Remediation (SRR) Tank Farm Facility Engineering requested SRNL to determine the conductivity of the supernate resident in SRS waste Tank 40 both experimentally and computationally. In addition, SRNL was requested to develop a correlation, if possible, that would be generally applicable to liquid waste resident in SRS waste tanks. A waste sample from Tank 40 was analyzed for composition and electrical conductivity as shown in Table 4-6, Table 4-7, and Table 4-9. The conductivity for the undiluted Tank 40 sample was 0.087 S/cm. The accuracy of the OLI Analyzer was determined using available literature data. Overall, 95% of computed estimates of electrical conductivity are within 15% of literature values for component concentrations from 0 to 15 M and temperatures from 0 to 125 °C. Though the computational results are generally in good agreement with the measured data, a small portion of the literature data deviates by as much as 76%. A simplified model was created that can be used readily to estimate the electrical conductivity of waste solutions in computer spreadsheets. The variability of this simplified approach deviates by up to 140% from measured values. Generally, this model can be applied to estimate the conductivity within a factor of two. The comparison of the simplified model to pure component literature data suggests that the simplified model will tend to underestimate the electrical conductivity. Comparison of the computed Tank 40 conductivity with the measured conductivity shows good agreement, within the range of deviation identified based on pure component literature data.
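
      A spreadsheet-style simplified model of the kind described can be sketched as a sum of concentration-weighted component contributions. The species and coefficients below are illustrative placeholders, not the SRNL correlation:

          # Minimal sketch of a spreadsheet-style conductivity estimate: a sum of
          # molar-concentration-weighted contributions per electrolyte component.
          # The coefficients are illustrative placeholders, NOT the SRNL fit.
          COND_COEFF_S_CM_PER_M = {   # hypothetical contributions, S/cm per mol/L
              "NaOH": 0.018,
              "NaNO3": 0.007,
              "NaNO2": 0.008,
          }

          def estimate_conductivity(molarities):
              """Estimate supernate conductivity (S/cm) from component molarities."""
              return sum(COND_COEFF_S_CM_PER_M[sp] * c for sp, c in molarities.items())

          tank40_like = {"NaOH": 1.5, "NaNO3": 2.0, "NaNO2": 0.5}   # mol/L, illustrative
          print("estimated conductivity ~", estimate_conductivity(tank40_like), "S/cm")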

    2. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

      SciTech Connect (OSTI)

      Sozer, M.C.

      1992-04-01

      A system transient analysis computer model (HFIRSYS) has been developed for analysis of small break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically and provides integration routines such as Gear's stiff algorithm, as well as numerous practical tools for generating eigenvalues, producing debug outputs, graphics capabilities, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effects of structural elasticity on the system pressure are significant; however, its capabilities are limited to single phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS for analyzing their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a user's manual, but to provide theoretical bases and basic information about the computer model and the reactor.
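
      Gear's stiff algorithm survives in modern libraries as BDF integrators. As a hedged illustration of the kind of stiff integration ACSL provided (using Robertson's standard kinetics test problem, not the HFIR model):

          import numpy as np
          from scipy.integrate import solve_ivp

          def robertson(t, y):
              """Robertson's stiff chemical kinetics problem, a standard test case."""
              y1, y2, y3 = y
              return [-0.04*y1 + 1.0e4*y2*y3,
                       0.04*y1 - 1.0e4*y2*y3 - 3.0e7*y2**2,
                       3.0e7*y2**2]

          # method="BDF" selects a backward-differentiation (Gear-type) integrator.
          sol = solve_ivp(robertson, (0.0, 1.0e4), [1.0, 0.0, 0.0],
                          method="BDF", rtol=1e-6, atol=1e-10)
          print("y at t=1e4:", sol.y[:, -1])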

    3. Final Report for Integrated Multiscale Modeling of Molecular Computing Devices

      SciTech Connect (OSTI)

      Glotzer, Sharon C.

      2013-08-28

      In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton and Oakridge National Laboratory we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.

    4. Mathematical modeling and computer simulation of processes in energy systems

      SciTech Connect (OSTI)

      Hanjalic, K.C. )

      1990-01-01

      This book is divided into the following chapters: 1. Modeling techniques and tools (fundamental concepts of modeling); 2. Fluid flow, heat and mass transfer, chemical reactions, and combustion; 3. Processes in energy equipment and plant components (boilers, steam and gas turbines, IC engines, heat exchangers, pumps and compressors, nuclear reactors, steam generators and separators, energy transport equipment, energy convertors, etc.); 4. New thermal energy conversion technologies (MHD, coal gasification and liquefaction, fluidized-bed combustion, pulse-combustors, multistage combustion, etc.); 5. Combined cycles and plants, cogeneration; 6. Dynamics of energy systems and their components; 7. Integrated approach to energy systems modeling; and 8. Application of modeling in energy expert systems.

    5. Modeling-Computer Simulations (Walker, Et Al., 2005) | Open Energy...

      Open Energy Info (EERE)

      occurrence model for geothermal systems based on fundamental geologic data. References J. D. Walker, A. E. Sabin, J. R. Unruh, J. Combs, F. C. Monastero (2005) Development Of...

    6. Modeling-Computer Simulations At Kilauea East Rift Geothermal...

      Open Energy Info (EERE)

      importance of water convection for distributing heat in the East Rift Zone. References Albert J. Rudman, David Epp (1983) Conduction Models Of The Temperature Distribution In The...

    7. Computer-Aided Construction of Combustion Chemistry Models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Constructing Accurate Combustion Chemistry Models: Butanols William H. Green & Michael Harper MIT Dept. of Chem. Eng. CEFRC Annual Meeting, Sept. 2010 The people who did this work:...

    8. Modeling and Analysis of a Lunar Space Reactor with the Computer Code

      Office of Scientific and Technical Information (OSTI)

      RELAP5-3D/ATHENA (Conference) | SciTech Connect Conference: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA Citation Details In-Document Search Title: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE

    9. Computer support to run models of the atmosphere. Final report

      SciTech Connect (OSTI)

      Fung, I.

      1996-08-30

      This research is focused on a better quantification of the variations in CO2 exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO2 distribution. The principal tool is a global three-dimensional tracer transport model used to advect and convect CO2 in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.
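
      The advection step of such a tracer transport model can be illustrated with a one-dimensional first-order upwind scheme on a periodic domain; this is a generic numerical sketch, not the GISS transport code:

          import numpy as np

          # First-order upwind advection of a CO2-like tracer on a periodic 1-D grid.
          nx, u, dx = 200, 5.0, 1.0e3           # cells, wind speed (m/s), spacing (m)
          dt = 0.8 * dx / u                      # CFL-stable time step
          q = np.exp(-0.5 * ((np.arange(nx) - 50) / 5.0) ** 2)   # initial tracer blob

          for _ in range(500):
              # Upwind difference for u > 0; np.roll gives periodic boundaries.
              q = q - u * dt / dx * (q - np.roll(q, 1))

          print("tracer max after advection:", q.max())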

    10. Modeling-Computer Simulations (Gritto & Majer) | Open Energy...

      Open Energy Info (EERE)

      are shown in Figure 1. The parameters of the fault were modeled after Coates and Schoenberg (1995), where the orientation of the fault relative to the finite-difference grid...

    11. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

      SciTech Connect (OSTI)

      XU, J.; COSTANTINO, C.; HOFMAYER, C.

      2006-06-26

      This paper discusses computations of seismic-induced soil pressures using finite element models for deeply embedded and/or buried stiff structures, such as those appearing in the conceptual designs of structures for advanced reactors.

    12. Theoretical and computer models of detonation in solid explosives

      SciTech Connect (OSTI)

      Tarver, C.M.; Urtiew, P.A.

      1997-10-01

      Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich-von Neumann-Doring (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High temperature, high density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model based on pressure dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state has reproduced this nanosecond time resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two-, and three-dimensional hydrodynamic code calculations. The next generation reactive flow model requires improved equations of state and temperature dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.
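
      As a small worked example of the temperature-dependent kinetics mentioned for the next-generation model, the sketch below evaluates a generic Arrhenius rate; the prefactor and activation energy are illustrative and not calibrated to any explosive:

          import math

          R_GAS = 8.314  # universal gas constant, J/(mol K)

          def arrhenius_rate(temperature_k, prefactor, activation_energy_j_mol):
              """Temperature-dependent reaction rate k = A * exp(-Ea / (R T))."""
              return prefactor * math.exp(-activation_energy_j_mol / (R_GAS * temperature_k))

          # Illustrative values only -- not fit to experimental detonation data.
          for T in (1000.0, 2000.0, 4000.0):
              print(T, "K ->", arrhenius_rate(T, prefactor=1.0e12,
                                              activation_energy_j_mol=1.5e5))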

    13. Inductively coupled plasma-atomic emission spectroscopy: a computer controlled, scanning monochromator system for the rapid determination of the elements

      SciTech Connect (OSTI)

      Floyd, M.A.

      1980-03-01

      A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.

    14. Computer model for characterizing, screening, and optimizing electrolyte systems

      SciTech Connect (OSTI)

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    15. CASTING DEFECT MODELING IN AN INTEGRATED COMPUTATIONAL MATERIALS ENGINEERING APPROACH

      SciTech Connect (OSTI)

      Sabau, Adrian S [ORNL

      2015-01-01

      To accelerate the introduction of new cast alloys, the simultaneous modeling and simulation of multiphysical phenomena needs to be considered in the design and optimization of mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. The data on calculated solidification shrinkage is presented and its effects on microporosity levels are discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for the sake of completion and to illustrate how the length scales of the microstructure features as well as porosity defects are taken into account for modeling the mechanical properties. Thus, the data on casting defects, including microstructure features, is crucial for evaluating the final performance-related properties of the component. ACKNOWLEDGEMENTS: This work was performed under a Cooperative Research and Development Agreement (CRADA) with Nemak Inc. and Chrysler Co. for the project "High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines." The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through the Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.

    16. Accelerated Climate Modeling for Energy | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility An example of a Category 5 hurricane simulated by the CESM at 13 km resolution. Precipitable water (gray scale) shows the detailed dynamical structure in the flow. Strong precipitation is overlaid in red. High resolution is necessary to simulate reasonable numbers of tropical cyclones, including Category 4 and 5 storms. Alan Scott and Mark Taylor, Sandia National Laboratories. Accelerated Climate Modeling

    17. Computational models for the berry phase in semiconductor quantum dots

      SciTech Connect (OSTI)

      Prabhakar, S.; Melnik, R. V. N.; Sebetci, A.

      2014-10-06

      By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.
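
      For reference, the quantity computed in such simulations is the standard Berry phase loop integral over the closed trajectory C in parameter space (a textbook expression, not a formula taken from this paper):

          \gamma_n = i \oint_C \langle \psi_n(\mathbf{R}) \,|\, \nabla_{\mathbf{R}}\, \psi_n(\mathbf{R}) \rangle \cdot d\mathbf{R}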

    18. Computer modeling of a CFB (circulating fluidized bed) gasifier

      SciTech Connect (OSTI)

      Gidaspow, D.; Ding, J.

      1990-06-01

      The overall objective of this investigation is to develop experimentally verified models for circulating fluidized bed (CFB) combustors. This report presents an extension of our cold flow modeling of a CFB given in our first quarterly report of this project and published in "Numerical Methods for Multiphase Flows," edited by I. Celik, D. Hughes, C. T. Crowe, and D. Lankford, FED-Vol. 91, American Society of Mechanical Engineers, pp. 47-56 (1990). The title of the paper is "Multiphase Navier-Stokes Equation Solver" by D. Gidaspow, J. Ding, and U.K. Jayaswal. To the two-dimensional code described in the above paper we added the energy equations and the conservation of species equations to describe a producer of synthesis gas from char. Under the simulation conditions the injected oxygen reacted near the inlet. The solid-gas mixing was sufficiently rapid that no undesirable hot spots were produced. This simulation illustrates the code's capability to model CFB reactors. 15 refs., 20 figs.

    19. Computer model for characterizing, screening, and optimizing electrolyte systems

      Energy Science and Technology Software Center (OSTI)

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    20. 2014-03-26 Issuance: Proposed Determination of Computer and Battery...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      This document is being made available through the Internet solely as a means to facilitate the public's access to this document. 2014-03-26 Proposed Determination of ...

    1. Computational Human Performance Modeling For Alarm System Design

      SciTech Connect (OSTI)

      Jacques Hugo

      2012-07-01

      The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations, and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques, and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
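
      The discrete event simulation idea can be sketched with a single-operator alarm queue. The arrival and handling distributions below are assumptions for illustration, not the INL model's parameters:

          import heapq, random

          def simulate_alarms(n_alarms=1000, mean_gap=30.0, mean_handle=20.0, seed=0):
              """Single-operator alarm queue: exponentially distributed alarm arrivals
              and handling times. Returns the mean alarm response time in seconds."""
              rng = random.Random(seed)
              t, events = 0.0, []
              for _ in range(n_alarms):
                  t += rng.expovariate(1.0 / mean_gap)       # next alarm arrival time
                  heapq.heappush(events, t)
              operator_free_at, response_times = 0.0, []
              while events:
                  arrival = heapq.heappop(events)
                  start = max(arrival, operator_free_at)     # wait if operator is busy
                  operator_free_at = start + rng.expovariate(1.0 / mean_handle)
                  response_times.append(operator_free_at - arrival)
              return sum(response_times) / len(response_times)

          print("mean alarm response time:", round(simulate_alarms(), 1), "s")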

    2. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

      SciTech Connect (OSTI)

      Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

      2008-09-01

      Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.

    3. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard A.

      2013-01-01

      This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.

    4. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    5. Verification of a VRF Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Nigusse, Bereket; Raustad, Richard

      2013-06-01

      This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data and found that the dual range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
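
      A bi-quadratic performance curve has the standard form z = a + b*x + c*x^2 + d*y + e*y^2 + f*x*y, with x and y typically the indoor wet-bulb and outdoor dry-bulb temperatures. A minimal sketch of a dual-range curve evaluation, with placeholder coefficients rather than manufacturer data:

          def biquadratic(coeffs, x, y):
              """Bi-quadratic curve: z = a + b*x + c*x^2 + d*y + e*y^2 + f*x*y."""
              a, b, c, d, e, f = coeffs
              return a + b*x + c*x*x + d*y + e*y*y + f*x*y

          # Placeholder coefficients (NOT manufacturer data): capacity ratio as a
          # function of indoor wet-bulb (x) and outdoor dry-bulb (y) temperature, deg C.
          CAP_FT_LOW = (0.90, 0.010, 0.0, -0.002, 0.0, 0.0)
          CAP_FT_HIGH = (0.80, 0.012, 0.0, -0.003, 0.0, 0.0)

          def capacity_ratio(indoor_wb, outdoor_db, boundary_db=25.0):
              """Dual-range curve: switch coefficient sets at an outdoor-temperature
              boundary, in the spirit of the VRF model's low/high ranges."""
              coeffs = CAP_FT_LOW if outdoor_db <= boundary_db else CAP_FT_HIGH
              return biquadratic(coeffs, indoor_wb, outdoor_db)

          print("capacity ratio at 19.4/35 C:", round(capacity_ratio(19.4, 35.0), 3))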

    6. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

      SciTech Connect (OSTI)

      Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan; Parker, Jack C.; Watson, David B; Jardine, Philip M

      2010-01-01

      Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach that exploits two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, a number of compute nodes equal to the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), is used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
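
      The MPI-parallel Jacobian idea can be sketched with mpi4py: each rank evaluates the forward-difference columns assigned to it, and rank 0 assembles the matrix. The model function below is a stand-in, not HGC5:

          # Run with, e.g.: mpiexec -n 4 python jacobian_mpi.py
          import numpy as np
          from mpi4py import MPI

          def model(p):
              """Stand-in forward model; in calibration this would be a full
              groundwater simulation returning residuals at observation points."""
              return np.array([p[0]**2 + p[1], p[0] * p[1], p[1]**3])

          comm = MPI.COMM_WORLD
          rank, size = comm.Get_rank(), comm.Get_size()
          p0 = np.array([1.0, 2.0])
          f0, h = model(p0), 1e-6

          # Each rank perturbs the parameter columns assigned to it (round-robin).
          cols = {}
          for j in range(rank, len(p0), size):
              dp = p0.copy(); dp[j] += h
              cols[j] = (model(dp) - f0) / h     # forward-difference column j

          # Gather all columns on rank 0 and assemble the Jacobian.
          all_cols = comm.gather(cols, root=0)
          if rank == 0:
              J = np.empty((len(f0), len(p0)))
              for chunk in all_cols:
                  for j, col in chunk.items():
                      J[:, j] = col
              print(J)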

    7. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    8. Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code

      SciTech Connect (OSTI)

      Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

      1985-04-01

      This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

    9. COMPARATIVE COMPUTATIONAL MODELING OF AIRFLOWS AND VAPOR DOSIMETRY IN THE RESPIRATORY TRACTS OF RAT, MONKEY, AND HUMAN

      SciTech Connect (OSTI)

      Corley, Richard A.; Kabilan, Senthil; Kuprat, Andrew P.; Carson, James P.; Minard, Kevin R.; Jacob, Rick E.; Timchalk, Charles; Glenny, Robb W.; Pipavath, Sudhaker; Cox, Timothy C.; Wallis, Chris; Larson, Richard; Fanucchi, M.; Postlewait, Ed; Einstein, Daniel R.

      2012-07-01

      Coupling computational fluid dynamics (CFD) with physiologically based pharmacokinetic (PBPK) models is useful for predicting site-specific dosimetry of airborne materials in the respiratory tract and elucidating the importance of species differences in anatomy, physiology, and breathing patterns. Historically, these models were limited to discrete regions of the respiratory system. CFD/PBPK models have now been developed for the rat, monkey, and human that encompass airways from the nose or mouth to the lung. A PBPK model previously developed to describe acrolein uptake in nasal tissues was adapted to the extended airway models as an example application. Model parameters for each anatomic region were obtained from the literature, measured directly, or estimated from published data. Airflow and site-specific acrolein uptake patterns were determined under steady-state inhalation conditions to provide direct comparisons with prior data and nasal-only simulations. Results confirmed that regional uptake was dependent upon airflow rates and acrolein concentrations, with nasal extraction efficiencies predicted to be greatest in the rat, followed by the monkey, then the human. For human oral-breathing simulations, acrolein uptake rates in oropharyngeal and laryngeal tissues were comparable to nasal tissues following nasal breathing under the same exposure conditions. For both breathing modes, higher uptake rates were predicted for lower tracheo-bronchial tissues of humans than either the rat or monkey. These extended airway models provide a unique foundation for comparing dosimetry across a significantly more extensive range of conducting airways in the rat, monkey, and human than prior CFD models.

    10. DOE Issues Funding Opportunity for Advanced Computational and Modeling Research for the Electric Power System

      Broader source: Energy.gov [DOE]

      The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

    11. Technical Review of the CENWP Computational Fluid Dynamics Model of the John Day Dam Forebay

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

      2010-12-01

      The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, the Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model, run in the CFD solver software STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound; however, CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.

    12. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

      DOE Patents [OSTI]

      Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

      2010-05-04

      A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
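
      One small piece of the thermodynamics module can be illustrated with a primer melting-temperature estimate. The Wallace rule below is a standard rough formula; a production PCR design tool would use nearest-neighbor thermodynamics instead:

          def wallace_tm(primer):
              """Rough melting temperature (deg C) by the Wallace rule:
              Tm = 2*(A+T) + 4*(G+C). Adequate only for short (~14-20 nt) primers;
              nearest-neighbor models are preferred for real PCR design."""
              s = primer.upper()
              at = s.count("A") + s.count("T")
              gc = s.count("G") + s.count("C")
              return 2 * at + 4 * gc

          for p in ("ATGCATGCATGCATGC", "GGCCGGCCGGCC"):
              print(p, "Tm ~", wallace_tm(p), "C")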

    13. Computer modeling of electromagnetic edge containment in twin-roll casting

      SciTech Connect (OSTI)

      Chang, F.C.; Turner, L.R.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1998-07-01

      This paper presents modeling studies of magnetohydrodynamics (MHD) analysis in twin-roll casting. Argonne National Laboratory (ANL) and Inland Steel Company have worked together to develop a 3-D computer model that can predict eddy currents, fluid flows, and liquid metal containment for an electromagnetic (EM) edge containment device. This mathematical model can greatly shorten casting research on the use of EM fields for liquid metal containment and control. It can also optimize the existing casting processes and minimize expensive, time-consuming full-scale testing. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in EM edge dams designed at Inland Steel for twin-roll casting. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve Maxwell's equations, Ohm's law, Navier-Stokes equations, and transport equations of turbulence flow in a casting process that uses EM fields. ELEKTRA is able to predict the eddy-current distribution and electromagnetic forces in complex geometry. CaPS-EM is capable of modeling fluid flows with free surfaces and dynamic rollers. The computed 3-D magnetic fields and induced eddy currents in ELEKTRA are used as input to flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.

    14. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

      Energy Science and Technology Software Center (OSTI)

      1999-05-01

      EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity, and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources, for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades to the software include the incorporation of finite-size sources, in addition to dipolar source fields, and a low-induction-number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated. The GUI also allows for plotting of the output.

    15. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      SciTech Connect (OSTI)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    16. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    17. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

      SciTech Connect (OSTI)

      Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

      2011-06-01

      This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user-base and the experimental validation base was decaying away quickly.

    18. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

      SciTech Connect (OSTI)

      Maiden, Wendy M.

      2010-05-01

      Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

    19. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    20. Superior model for fault tolerance computation in designing nano-sized circuit systems

      SciTech Connect (OSTI)

      Singh, N. S. S. Muthuvalu, M. S.; Asirvadam, V. S.

      2014-10-24

      As CMOS technology scales into the nanometer regime, reliability becomes a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of nano-electronic circuits. Computing reliability becomes increasingly troublesome and time consuming as the computational complexity builds up with circuit size, so being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first describes the development of an automated reliability evaluation tool based on a generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up reliability analysis for very large numbers of nano-electronic circuits. Secondly, using the developed tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. BDEC gives exact and transparent reliability measures, but as the complexity of same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. This lower BDEC measure is explained in the paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results indicate that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
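
      The PGM idea can be sketched for a single gate type: propagate the probability that each signal is logically 1 through gates that flip their output with probability eps, then compare against the fault-free signal. This is a generic illustration (with an independence approximation for the final agreement step), not the Matlab tool described:

          def nand_pgm(p1, p2, eps):
              """Probabilistic Gate Model for a 2-input NAND: returns P(output = 1)
              when the gate output flips with probability eps."""
              ideal_one = 1.0 - p1 * p2            # P(fault-free NAND outputs 1)
              return (1.0 - eps) * ideal_one + eps * (1.0 - ideal_one)

          def chain_reliability(n_gates, eps, p_in=0.5):
              """Chain of 2-input NAND gates, one fresh input at p_in each stage.
              Reliability is approximated as agreement with the eps = 0 signal."""
              p_err, p_ideal = p_in, p_in
              for _ in range(n_gates):
                  p_err = nand_pgm(p_err, p_in, eps)
                  p_ideal = nand_pgm(p_ideal, p_in, 0.0)
              # Probability the faulty and ideal outputs agree, assuming independence.
              return p_err * p_ideal + (1 - p_err) * (1 - p_ideal)

          for n in (1, 4, 16):
              print(n, "gates -> reliability ~", round(chain_reliability(n, eps=0.05), 4))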

    1. Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model

      SciTech Connect (OSTI)

      Sharma, Chandan; Raustad, Richard

      2013-06-01

      Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy-efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses, which provide a significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data for these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand from the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, a VRF system model based on lab-measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points. Infiltration model inputs were adjusted in the beginning to tune the computer model, and then subsequent field measurements were compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.
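
      Agreement between simulation and field data in such studies is commonly summarized with NMBE and CV(RMSE), as in ASHRAE Guideline 14-style calibration. A minimal sketch with illustrative data (the paper's actual acceptance criteria are not stated here):

          import numpy as np

          def nmbe(measured, simulated):
              """Normalized mean bias error (%), per common calibration practice."""
              m, s = np.asarray(measured), np.asarray(simulated)
              return 100.0 * (s - m).sum() / ((len(m) - 1) * m.mean())

          def cv_rmse(measured, simulated):
              """Coefficient of variation of the RMSE (%)."""
              m, s = np.asarray(measured), np.asarray(simulated)
              rmse = np.sqrt(((s - m) ** 2).sum() / (len(m) - 1))
              return 100.0 * rmse / m.mean()

          measured  = [120.0, 95.0, 130.0, 110.0, 105.0]   # e.g., daily kWh, illustrative
          simulated = [118.0, 99.0, 126.0, 114.0, 103.0]
          print("NMBE %:", round(nmbe(measured, simulated), 2))
          print("CV(RMSE) %:", round(cv_rmse(measured, simulated), 2))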

    2. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

      SciTech Connect (OSTI)

      Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

      1998-10-01

      Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III contains some of the assessments performed by LANL and FzK. GASFLOW is under continual development, assessment, and application by LANL and FzK. This manual is considered a living document and will be updated as warranted.

    3. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-07-28

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

    4. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop. W. Musial, M. Lawson, and S. Rooney, National Renewable Energy Laboratory, Technical Report NREL/TP-5000-57605, February 2013. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, 303-275-3000

    5. NREL Computer Models Integrate Wind Turbines with Floating Platforms (Fact Sheet)

      SciTech Connect (OSTI)

      Not Available

      2011-07-01

      Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost-effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.

    6. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop. W. Musial, M. Lawson, and S. Rooney, National Renewable Energy Laboratory, Technical Report NREL/TP-5000-57605, February 2013. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, 303-275-3000

    7. Determining Interactions in PSA models: Application to a Space PSA

      SciTech Connect (OSTI)

      C. Smith; E. Borgonovo

      2010-06-01

      This paper describes an importance measure interaction study of a probabilistic safety assessment (PSA) performed for a hypothetical aerospace lunar mission. The PSA methods used in this study follow the general guidance provided in the NASA Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. For the PSA portion, phase-based event tree and fault tree logic structures were used to model a lunar mission, including multiple phases (from launch to return to the Earth's surface) and multiple critical systems. Details of the analysis results are not provided in this paper; instead, specific basic events are denoted by number (e.g., the first event is 1, the second is 2, and so on). The model, however, used approximately 150 fault trees and over 800 basic events. Following analysis and truncation of cut sets, about 400 basic events remained to evaluate. We used this model to explore interactions between different basic events and systems. These sensitivity studies provide high-level insights into features of the PSA for the hypothetical lunar mission.
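
      To make the idea of event interactions concrete, the sketch below computes a classical Birnbaum importance and a simple second-order (joint) effect for a toy two-cut-set risk model. The model, the probabilities, and the function names are invented for illustration and are not taken from the paper.

          import itertools

          def risk(p):
              # Toy cut-set model: system fails if events 1 AND 2 occur, OR event 3.
              return 1.0 - (1.0 - p[0] * p[1]) * (1.0 - p[2])

          def birnbaum(p, i):
              """Birnbaum importance: risk with event i certain minus event i impossible."""
              hi, lo = list(p), list(p)
              hi[i], lo[i] = 1.0, 0.0
              return risk(hi) - risk(lo)

          def interaction(p, i, j):
              """Joint (second-order) effect of basic events i and j on risk."""
              out = {}
              for a, b in itertools.product((0.0, 1.0), repeat=2):
                  q = list(p)
                  q[i], q[j] = a, b
                  out[(a, b)] = risk(q)
              return out[(1, 1)] - out[(1, 0)] - out[(0, 1)] + out[(0, 0)]

          p = [1e-3, 5e-2, 1e-4]  # hypothetical basic-event probabilities
          print(birnbaum(p, 0), interaction(p, 0, 1))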

    8. Review of the synergies between computational modeling and experimental characterization of materials across length scales

      SciTech Connect (OSTI)

      Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean -Hubert

      2015-11-16

      With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization, and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain greater insight into structure–property relationships and to study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion of the perspectives of various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. The manuscript concludes with a discussion of open problems and scientific questions being explored to advance this relatively new field of research.

    9. Determination of a Minimum Soiling Level to Affect Photovoltaic Devices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Determination of a Minimum Soiling Level to Affect Photovoltaic Devices. Patrick D. Burton and Bruce H. King, Sandia National Laboratories, Albuquerque, NM 87185 USA. Abstract: Soil accumulation on photovoltaic (PV) modules presents a challenge to long-term performance prediction and lifetime estimates due to the inherent difficulty in quantifying small changes over an extended period. Low mass loadings of soil are a common occurrence, but remain difficult to quantify. In order to

    10. Efficient Computation of Info-Gap Robustness for Finite Element Models

      SciTech Connect (OSTI)

      Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

      2012-07-05

      A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because evaluating a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatment of the info-gap problems using the adjoint methodology is outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
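
      As a point of reference for the cost the adjoint approach avoids, the sketch below estimates an info-gap robustness for a small Ax = b system by brute force: sample the uncertainty set at a given horizon h, take the worst-case deviation from the nominal solution, and bisect for the largest h that still meets the performance requirement. The fractional-error uncertainty model and all values are illustrative assumptions, not the report's.

          import numpy as np

          rng = np.random.default_rng(0)

          def worst_deviation(A0, b, h, trials=2000):
              """Sampled worst-case deviation of x from nominal when each entry of A
              may vary by a fraction h of its nominal value (a fractional-error
              info-gap model; one of several possible uncertainty models)."""
              x0 = np.linalg.solve(A0, b)
              worst = 0.0
              for _ in range(trials):
                  A = A0 * (1.0 + h * rng.uniform(-1.0, 1.0, A0.shape))
                  x = np.linalg.solve(A, b)
                  worst = max(worst, np.linalg.norm(x - x0))
              return worst

          def robustness(A0, b, r_crit, h_max=0.5, tol=1e-3):
              """Bisect for the largest h whose worst-case deviation stays below r_crit."""
              lo, hi = 0.0, h_max
              while hi - lo > tol:
                  mid = 0.5 * (lo + hi)
                  if worst_deviation(A0, b, mid) <= r_crit:
                      lo = mid
                  else:
                      hi = mid
              return lo

          A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
          b = np.array([1.0, 2.0])
          print(robustness(A0, b, r_crit=0.05))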

    11. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

      SciTech Connect (OSTI)

      Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

      2014-05-15

      Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6% to 23.8%) and 14.6% (range: −7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8% to 40.3%) and 13.1% (range: −1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
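
      The power-fit relationship in model (2) can be illustrated with a short regression sketch: fit V_day = a * V0**b in log space across a training population, then predict the day-specific volume of a new tumor from its initial volume. All numbers and names below are invented for illustration; the paper's actual regression design may differ.

          import numpy as np

          def fit_power_model(v0, vd):
              """Fit V_d = a * V_0**b across a training population for one treatment
              day, by least squares in log space."""
              X = np.column_stack([np.ones_like(v0), np.log(v0)])
              coef, *_ = np.linalg.lstsq(X, np.log(vd), rcond=None)
              log_a, b = coef
              return np.exp(log_a), b

          def predict(v0_new, a, b):
              return a * v0_new ** b

          # Hypothetical volumes (cm^3) for 5 training tumors at day 0 and day 15.
          v0 = np.array([12.0, 30.0, 8.5, 22.0, 15.0])
          v15 = np.array([9.0, 21.0, 7.1, 16.5, 11.2])
          a, b = fit_power_model(v0, v15)
          print(predict(18.0, a, b))  # predicted day-15 volume for a new tumor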

    12. Computer modeling of electromagnetic fields and fluid flows for edge containment in continuous casting

      SciTech Connect (OSTI)

      Chang, F.C.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1996-02-01

      A computer model was developed to predict eddy currents and fluid flows in molten steel. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in electromagnetic (EM) edge dams (EMDs) designed at Inland Steel for twin-roll casting. The model can be used to optimize the EMD design for a given application and to minimize expensive, time-consuming full-scale testing. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA is able to predict the eddy-current distribution and the electromagnetic forces in complex geometries. CaPS-EM is capable of modeling fluid flows with free surfaces. Results of the numerical simulation compared well with measurements obtained from a static test.

    13. An integrated computer modeling environment for regional land use, air quality, and transportation planning

      SciTech Connect (OSTI)

      Hanley, C.J.; Marshall, N.L.

      1997-04-01

      The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New Mexico, area. This environment provides predictive capability combined with a graphical and geographical interface. The graphical interface shows the causal relationships between data and policy scenarios and supports alternative model formulations. Scenarios are launched from within a Geographic Information System (GIS), and the data produced by each model component at each time step within a simulation are stored in the GIS. A menu-driven query system is used to review link-based results and regional and area-wide results. These results can also be compared across time or between alternative land use scenarios. Using this environment, policies can be developed and implemented based on comparative analysis, rather than on single-step future projections. 16 refs., 3 figs., 2 tabs.

    14. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

      SciTech Connect (OSTI)

      Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

      2015-12-21

      Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate “sub-ecosystem-scale” parameterizations.

    15. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; et al

      2015-12-21

      Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate “sub-ecosystem-scale” parameterizations.

    16. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

      SciTech Connect (OSTI)

      Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

      2012-10-07

      Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that one simply plugs in the numbers, the answer comes out, and the job is done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This discrepancy between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of average blending times, experimental blending times varied significantly from the average.

    17. Computational Fluid Dynamics (CFD) Modeling for High Rate Pulverized Coal Injection (PCI) into the Blast Furnace

      SciTech Connect (OSTI)

      Dr. Chenn Zhou

      2008-10-15

      Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease coke and total energy consumption along with minimizing environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by a lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Because of the difficulty of obtaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.

    18. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

      SciTech Connect (OSTI)

      Oprisan, Sorinel Adrian; Oprisan, Ana

      2005-03-31

      Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrotic. The second sub-system consists of cytotoxic active (effector) cells (EC), with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutic, and radiotherapeutic treatments.
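
      A lattice (cellular automaton) implementation is a common way to realize this kind of cell-cycle-plus-local-interaction model. The sketch below is a deliberately minimal toy version: three tumor states plus a mean-field immune "kill" probability standing in for the effector-cell sub-system. All states and rates are invented for illustration and are not the paper's parameters.

          import numpy as np

          rng = np.random.default_rng(1)
          EMPTY, ACTIVE, DORMANT, NECROTIC = 0, 1, 2, 3

          def step(grid, p_divide=0.4, p_dormant=0.1, p_necrosis=0.05, p_kill=0.02):
              """One update of a toy lattice model with a three-stage tumor cell
              cycle and a mean-field immune 'kill' term. All rates are illustrative."""
              new = grid.copy()
              nx, ny = grid.shape
              for i in range(nx):
                  for j in range(ny):
                      s = grid[i, j]
                      if s == ACTIVE:
                          if rng.random() < p_kill:        # effector-cell kill
                              new[i, j] = NECROTIC
                          elif rng.random() < p_dormant:   # nutrient stress
                              new[i, j] = DORMANT
                          elif rng.random() < p_divide:    # divide into a neighbor
                              di, dj = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
                              ni, nj = (i + di) % nx, (j + dj) % ny
                              if grid[ni, nj] == EMPTY:
                                  new[ni, nj] = ACTIVE
                      elif s == DORMANT and rng.random() < p_necrosis:
                          new[i, j] = NECROTIC
              return new

          grid = np.zeros((50, 50), dtype=int)
          grid[25, 25] = ACTIVE
          for _ in range(100):
              grid = step(grid)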

    19. Enabling a Highly-Scalable Global Address Space Model for Petascale Computing

      SciTech Connect (OSTI)

      Apra, Edoardo; Vetter, Jeffrey S; Yu, Weikuan

      2010-01-01

      Over the past decade, the trajectory to the petascale has been built on increased complexity and scale of the underlying parallel architectures. Meanwhile, software developers have struggled to provide tools that maintain the productivity of computational science teams using these new systems. In this regard, Global Address Space (GAS) programming models provide a straightforward and easy-to-use addressing model, which can lead to improved productivity. However, the scalability of GAS depends directly on the design and implementation of the runtime system on the target petascale distributed-memory architecture. In this paper, we describe the design, implementation, and optimization of the Aggregate Remote Memory Copy Interface (ARMCI) runtime library on the Cray XT5 2.3 PetaFLOPs computer at Oak Ridge National Laboratory. We optimized our implementation with the flow intimation technique that we have introduced in this paper. Our optimized ARMCI implementation improves scalability of both the Global Arrays (GA) programming model and a real-world chemistry application, NWChem, from small jobs up through 180,000 cores.

    20. Swelling in light water reactor internal components: Insights from computational modeling

      SciTech Connect (OSTI)

      Stoller, Roger E.; Barashev, Alexander V.; Golubov, Stanislav I.

      2015-08-01

      A modern cluster dynamics model has been used to investigate the materials and irradiation parameters that control microstructural evolution under the relatively low-temperature exposure conditions that are representative of the operating environment for in-core light water reactor components. The focus is on components fabricated from austenitic stainless steel. The model accounts for the synergistic interaction between radiation-produced vacancies and the helium that is produced by nuclear transmutation reactions. Cavity nucleation rates are shown to be relatively high in this temperature regime (275 to 325°C), but are sensitive to assumptions about the fine-scale microstructure produced under low-temperature irradiation. The cavity nucleation rates observed run counter to the expectation that void swelling would not occur under these conditions. This expectation was based on previous research on void swelling in austenitic steels in fast reactors, and the misleading impression arose primarily from an absence of relevant data. The results of the computational modeling are generally consistent with recent data obtained by examining ex-service components. However, it has been shown that the sensitivity of the model's predictions of low-temperature swelling behavior to assumptions about the primary damage source term and the specification of the mean-field sink strengths is somewhat greater than that observed at higher temperatures. Further assessment of the mathematical model is underway to meet the long-term objective of this research, which is to provide a predictive model of void swelling at relevant lifetime exposures to support extended reactor operations.

    1. New Set of Computational Tools and Models Expected to Help Enable Rapid Development and Deployment of Carbon Capture Technologies

      Broader source: Energy.gov [DOE]

      An eagerly anticipated suite of 21 computational tools and models to help enable rapid development and deployment of new carbon capture technologies is now available from the Carbon Capture Simulation Initiative.

    2. Validation of the thermospheric vector spherical harmonic (VSH) computer model. Master's thesis

      SciTech Connect (OSTI)

      Davis, J.L.

      1991-01-01

      A semi-empirical computer model of the lower thermosphere has been developed that provides a description of the composition and dynamics of the thermosphere (Killeen et al., 1992). Input variables needed to run the VSH model include time, space, and geophysical conditions. One of the output variables the model provides, neutral density, is of particular interest to the U.S. Air Force. Neutral densities vary both as a result of changes in solar flux (e.g., the solar cycle) and as a result of changes in the magnetosphere (e.g., large changes occur in neutral density during geomagnetic storms). Satellites in Earth orbit experience aerodynamic drag due to the atmospheric density of the thermosphere. The variability in neutral density described above affects the drag a satellite experiences and, as a result, can change the orbital characteristics of the satellite. These changes make it difficult to track the satellite's position. Therefore, it is particularly important to ensure that the accuracy of the model's neutral density is optimized for all input parameters. To accomplish this, a validation program was developed to evaluate the strengths and weaknesses of the model's density output by comparing it to SETA-2 (satellite electrostatic accelerometer) total mass density measurements.

    3. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

      SciTech Connect (OSTI)

      G. R. Odette; G. E. Lucas

      2005-11-15

      This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

    4. Why applicants should use computer simulation models to comply with the FERC's new merger policy

      SciTech Connect (OSTI)

      Frankena, M.W.; Morris, J.R.

      1997-02-01

      Computer models for electric utility use in complying with the US Federal Energy Regulatory Commission policy on mergers are described. Four types of simulation models that are widely used in the electric power industry are considered as tools for analyzing market power issues: dispatch/transportation models, dispatch/unit-commitment models, load-flow models, and load-flow/dispatch models. Basic model capabilities and limitations are described. Uses of the models for other purposes are also noted, including regulatory filings, antitrust litigation, and evaluation of pricing strategies.

    5. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

    6. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-07-16

      Numerical modeling has become a critical tool to the U.S. Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most advanced groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    7. April 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      R.A. (1997) 69 > Computational procedures for determining parameters in Ramberg-Osgood elastoplastic model based on modulus and damping versus strain Ueng, Tzou-Shin; Chen, ...

    8. September 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Not Available (1987) 89 Computational procedures for determining parameters in Ramberg-Osgood elastoplastic model based on modulus and damping versus strain Ueng, Tzou-Shin; Chen, ...

    9. COMPUTATIONAL THERMODYNAMIC MODELING OF HOT CORROSION OF ALLOYS HAYNES 242 AND HASTELLOY N FOR MOLTEN SALT SERVICE

      SciTech Connect (OSTI)

      Michael V. Glazoff; Piyush Sabharwall; Akira Tokuhiro

      2014-09-01

      An evaluation of thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in eutectic mixtures of KF and ZrF4 was carried out in support of the development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using the computational thermodynamics tool ThermoCalc, leading to the development of a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was associated with chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance. This was not surprising, given that it was originally developed to withstand the harsh conditions of the molten salt environment. The results obtained in this study provided confidence in the employed methods of computational thermodynamics and could be used in future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.

    10. Emergency Response Equipment and Related Training: Airborne Radiological Computer System (Model II)

      SciTech Connect (OSTI)

      David P. Colton

      2007-02-28

      The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record airborne gamma radiation levels and the corresponding latitude and longitude coordinates, and to provide a first overview of the extent and severity of an accident's impact. Second, the portable system had to be light enough and durable enough that it could be mounted in an aircraft, ground vehicle, or watercraft. Third, the system had to control the collection and storage of the data, as well as provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system would only be used for analyzing and plotting the data. In essence, the provided equipment is composed of an acquisition system and an analysis system. The data can be transferred from the acquisition system to the analysis system at the end of the data collection or at some other agreeable time.

    11. Extraction of actinides by multi-dentate diamides and their evaluation with computational molecular modeling

      SciTech Connect (OSTI)

      Sasaki, Y.; Kitatsuji, Y.; Hirata, M.; Kimura, T.; Yoshizuka, K.

      2008-07-01

      Multi-dentate diamides have been synthesized and examined for actinide (An) extraction. Bi- and tridentate extractants are the focus of this work. The extraction of actinides was performed from 0.1-6 M HNO₃ into organic solvents. It was evident that N,N,N',N'-tetra-alkyl-diglycolamide (DGA) derivatives, 2,2'-(methylimino)bis(N,N-dioctyl-acetamide) (MIDOA), and N,N'-dimethyl-N,N'-dioctyl-2-(3-oxa-pentadecane)-malonamide (DMDOOPDMA) have relatively high D values (D(Pu) > 70). The following notable results using DGA extractants were obtained: (1) DGAs with short alkyl chains give higher D values than those with long alkyl chains; (2) DGAs with long alkyl chains have high solubility in n-dodecane. Computational molecular modeling was also used to elucidate the effects of the structural and electronic properties of the reagents on their different extractabilities. (authors)

    12. Introduction to Focus Issue: Rhythms and Dynamic Transitions in Neurological Disease: Modeling, Computation, and Experiment

      SciTech Connect (OSTI)

      Kaper, Tasso J. Kramer, Mark A.; Rotstein, Horacio G.

      2013-12-15

      Rhythmic neuronal oscillations across a broad range of frequencies, as well as spatiotemporal phenomena such as waves and bumps, have been observed in various areas of the brain and proposed as critical to brain function. While there is a long and distinguished history of studying rhythms in nerve cells and neuronal networks in healthy organisms, the association of rhythms with disease, and their analysis, are more recent developments. Indeed, it is now thought that certain aspects of diseases of the nervous system, such as epilepsy, schizophrenia, Parkinson's, and sleep disorders, are associated with transitions or disruptions of neurological rhythms. This focus issue brings together articles presenting modeling, computational, analytical, and experimental perspectives on rhythms and the dynamic transitions between them that are associated with various diseases.

    13. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-08-01

      Numerical modeling has become a critical tool to the Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most advanced groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors on the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies, where overly simplistic historical modeling erroneously predicted decade-scale removal times for uranium by ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    14. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The TRACC Computational Clusters: With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system with each node having two AMD

    15. Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard

      2013-06-01

      Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor units. These systems move refrigerant between the outdoor and indoor units, which eliminates the need for duct work in most cases. Since many applications install duct work in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor, which may lead to operational differences compared to single-speed compression systems. To characterize operational differences, the computer model performance curves used to simulate cooling operation are also evaluated. The information in this paper is intended to provide a relative difference in system energy use and to compare various installation practices that can impact performance. Comparative results of VRF versus conventional HVAC systems include energy use differences due to duct location, differences in fan energy when ducts are eliminated, and differences associated with electric versus fossil-fuel heating systems.
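
      The duct heat gain effect described above is, to first order, a conductance times a temperature difference. The sketch below shows that first-order arithmetic; it is our simplification for illustration, not the duct model EnergyPlus actually uses, and all parameter values are hypothetical.

          def duct_heat_gain(ua, t_ambient, t_supply, m_dot, cp=1006.0):
              """Steady-state heat gain (W) and supply-air temperature rise (K) for a
              duct of overall conductance UA (W/K) run through an unconditioned
              space. A simplified first-order sketch, not the EnergyPlus algorithm."""
              q = ua * (t_ambient - t_supply)   # heat flow into the duct air
              dt = q / (m_dot * cp)             # temperature rise of the airstream
              return q, dt

          # Example: a 15 W/K duct in a 40 C attic carrying 0.5 kg/s of 13 C supply air.
          q, dt = duct_heat_gain(ua=15.0, t_ambient=40.0, t_supply=13.0, m_dot=0.5)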

    16. Development of Computational Tools for Metabolic Model Curation, Flux Elucidation and Strain Design

      SciTech Connect (OSTI)

      Maranas, Costas D

      2012-05-21

      An overarching goal of the Department of Energy's mission is the efficient deployment and engineering of microbial and plant systems to enable biomass conversion in pursuit of high energy density liquid biofuels. This has spurred the pace at which new organisms are sequenced and annotated. This torrent of genomic information has opened the door to understanding metabolism not just in skeletal pathways and a handful of microorganisms but through truly genome-scale reconstructions derived for hundreds of microbes and plants. Understanding and redirecting metabolism is crucial because metabolic fluxes are unique descriptors of cellular physiology that directly assess the current cellular state and quantify the effect of genetic engineering interventions. At the same time, however, trying to keep pace with the rate of genomic data generation has ushered in a number of modeling and computational challenges related to (i) the automated assembly, testing, and correction of genome-scale metabolic models, (ii) metabolic flux elucidation using labeled isotopes, and (iii) comprehensive identification of engineering interventions leading to the desired metabolism redirection.
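
      Genome-scale models of this kind are typically interrogated with flux balance analysis (FBA): impose steady state S·v = 0, bound the fluxes, and solve a linear program for an objective such as biomass production. A toy three-reaction sketch follows; the network, bounds, and objective are invented for illustration.

          import numpy as np
          from scipy.optimize import linprog

          # Toy network: A_ext -> A -> B -> biomass. Stoichiometric matrix S has
          # rows = metabolites (A, B) and columns = reactions (v1: uptake,
          # v2: A -> B, v3: B -> biomass). Steady state requires S v = 0.
          S = np.array([[1, -1,  0],
                        [0,  1, -1]])
          bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
          c = np.array([0, 0, -1])                   # maximize v3 (minimize -v3)

          res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
          print(res.x)  # optimal flux distribution, here [10, 10, 10]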

    17. Wind Turbine Modeling for Computational Fluid Dynamics: December 2010 - December 2012

      SciTech Connect (OSTI)

      Tossas, L. A. M.; Leonardi, S.

      2013-07-01

      With fossil fuels in short supply and environmental awareness increasing, wind energy is becoming more and more important. As the market for wind energy grows, wind turbines and wind farms are becoming larger. Current utility-scale turbines extend a significant distance into the atmospheric boundary layer. Therefore, the interaction between the atmospheric boundary layer and the turbines and their wakes needs to be better understood. The turbulent wakes of upstream turbines affect the flow field of the turbines behind them, decreasing power production and increasing mechanical loading. With a better understanding of this type of flow, wind farm developers could plan better-performing, less maintenance-intensive wind farms. Simulating this flow using computational fluid dynamics is one important way to gain a better understanding of wind farm flows. In this study, we compare the performance of actuator disc and actuator line models in producing wind turbine wakes and the wake-turbine interaction between multiple turbines. We also examine parameters that affect the performance of these models, such as grid resolution, the use of a tip-loss correction, and the way in which the turbine force is projected onto the flow field.
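
      The essence of an actuator disc model is to replace the rotor geometry with an equivalent momentum sink. A minimal sketch of the thrust such a disc imposes is shown below; the thrust coefficient and rotor size are illustrative, and in a CFD solver this force would be distributed over the cells the disc intersects, with smearing along rotating blade lines for the actuator-line variant.

          import numpy as np

          def actuator_disc_thrust(u_inf, radius, ct=0.75, rho=1.225):
              """Total thrust (N) of an idealized actuator disc of given radius in a
              free-stream wind u_inf (m/s); ct is a representative thrust coefficient.
              In an LES solver this force is applied to the flow as a momentum sink
              over the cells the disc intersects."""
              area = np.pi * radius ** 2
              return 0.5 * rho * area * ct * u_inf ** 2

          print(actuator_disc_thrust(u_inf=8.0, radius=63.0))  # utility-scale rotor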

    18. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect (OSTI)

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

    19. The Use Of Computational Human Performance Modeling As Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
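
      Task network models of this kind are typically exercised as discrete-event or Monte Carlo simulations over a sequence of stochastic task durations and error branches. The sketch below is a bare-bones illustration of that pattern; the task list, the exponential duration assumption, and the error probabilities are all invented and bear no relation to the actual ATR analysis.

          import random

          # Hypothetical fuel-handling task network: (name, mean duration in s,
          # probability of an unrecoverable error during the step). Values invented.
          TASKS = [("walk to canal", 30.0, 1e-5),
                   ("attach tool", 45.0, 1e-4),
                   ("lift element", 60.0, 5e-4),
                   ("inspect", 120.0, 2e-4),
                   ("transfer", 90.0, 3e-4)]

          def run_once():
              """Simulate one pass through the task sequence; return (time, failed)."""
              t = 0.0
              for _, mean, p_err in TASKS:
                  t += random.expovariate(1.0 / mean)  # stochastic task duration
                  if random.random() < p_err:
                      return t, True
              return t, False

          n = 100_000
          results = [run_once() for _ in range(n)]
          p_fail = sum(failed for _, failed in results) / n
          mean_t = sum(t for t, _ in results) / n
          print(p_fail, mean_t)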

    20. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

      SciTech Connect (OSTI)

      Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

      2006-10-01

      Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
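
      In sampling-based evidence-theory propagation, each epistemic input is specified as focal elements (intervals) carrying basic probability assignments; the model output range over each joint focal cell is bounded, and belief and plausibility of an event are accumulated from the cell masses. A toy sketch follows, with all intervals, masses, and the stand-in model invented for illustration; the sampled extrema only approximate the true cell bounds.

          import itertools, random

          def model(x, y):
              return x * y  # stand-in for an expensive simulation

          # Epistemic inputs as focal elements: (interval, basic probability assignment).
          X = [((1.0, 2.0), 0.6), ((1.5, 3.0), 0.4)]
          Y = [((0.5, 1.0), 0.7), ((0.8, 2.0), 0.3)]

          def bel_pl(threshold, n_samp=500):
              """Belief/plausibility that model output <= threshold, estimated by
              sampling each joint focal cell to bound the output range."""
              bel = pl = 0.0
              for (ix, mx), (iy, my) in itertools.product(X, Y):
                  outs = [model(random.uniform(*ix), random.uniform(*iy))
                          for _ in range(n_samp)]
                  lo, hi = min(outs), max(outs)
                  m = mx * my                      # joint mass of this focal cell
                  if hi <= threshold: bel += m     # cell certainly satisfies the event
                  if lo <= threshold: pl += m      # cell possibly satisfies the event
              return bel, pl

          print(bel_pl(2.0))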

    1. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

      SciTech Connect (OSTI)

      Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

      2015-01-15

      Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), root-mean-square symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; for the canine, 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1.06 ± 0.40 mm; for the premolar, 37.95 ± 10.13 mm³, 92.45 ± 2.29%, 0.29 ± 0.06 mm, 0.33 ± 0.10 mm, and 1.28 ± 0.72 mm; and for the molar, 52.38 ± 17.27 mm³, 94.12 ± 1.38%, 0.30 ± 0.08 mm, 0.35 ± 0.17 mm, and 1.52 ± 0.75 mm. The computation time of the proposed method for segmenting the CBCT images of one subject was 7.25 ± 0.73 min. Compared with the two other methods, the proposed method achieves a significant improvement in terms of accuracy. Conclusions: The presented tooth segmentation method can be used to segment tooth contours from CT images accurately and efficiently.
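
      Of the reported metrics, the volume overlap pair is easy to state exactly: DSC = 2|A∩B| / (|A| + |B|), and VD is the difference in segmented volumes. A small sketch over binary masks is given below; the array shapes and voxel size are illustrative only.

          import numpy as np

          def dice_and_volume_difference(seg, ref, voxel_volume=1.0):
              """Dice similarity coefficient (%) and absolute volume difference
              (in voxel_volume units) between two binary segmentation masks."""
              seg, ref = seg.astype(bool), ref.astype(bool)
              inter = np.logical_and(seg, ref).sum()
              dsc = 200.0 * inter / (seg.sum() + ref.sum())
              vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume
              return dsc, vd

          seg = np.zeros((10, 10), bool); seg[2:7, 2:7] = True
          ref = np.zeros((10, 10), bool); ref[3:8, 3:8] = True
          print(dice_and_volume_difference(seg, ref, voxel_volume=0.25))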

    2. Unveiling Stability Criteria of DNA-Carbon Nanotubes Constructs by Scanning Tunneling Microscopy and Computational Modeling

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Kilina, Svetlana; Yarotski, Dzmitry A.; Talin, A. Alec; Tretiak, Sergei; Taylor, Antoinette J.; Balatsky, Alexander V.

      2011-01-01

      We present a combined approach that relies on computational simulations and scanning tunneling microscopy (STM) measurements to reveal morphological properties and stability criteria of carbon nanotube-DNA (CNT-DNA) constructs. Application of STM allows direct observation of very stable CNT-DNA hybrid structures with a well-defined DNA wrapping angle of 63.4° and a coiling period of 3.3 nm. Using force field simulations, we determine how the DNA-CNT binding energy depends on the sequence and binding geometry of a single-strand DNA. This dependence allows us to quantitatively characterize the stability of a hybrid structure with an optimal π-stacking between DNA nucleotides and the tube surface and better interpret STM data. Our simulations clearly demonstrate the existence of a very stable DNA binding geometry for (6,5) CNT, as evidenced by the presence of a well-defined minimum in the binding energy as a function of the angle between the DNA strand and the nanotube chiral vector. This novel approach demonstrates the feasibility of CNT-DNA geometry studies with subnanometer resolution and paves the way towards complete characterization of the structural and electronic properties of drug-delivering systems based on DNA-CNT hybrids as a function of DNA sequence and nanotube chirality.

    3. Computational Modeling of Fluid Flow through a Fracture in Permeable Rock

      SciTech Connect (OSTI)

      Crandall, Dustin; Ahmadi, Goodarz; Smith, Duane H

      2010-01-01

      Laminar, single-phase, finite-volume solutions to the Navier-Stokes equations for fluid flow through a fracture within permeable media have been obtained. The fracture geometry was acquired from computed tomography scans of a fracture in Berea sandstone, capturing the small-scale roughness of these natural fluid conduits. First, the roughness of the two-dimensional fracture profiles was analyzed and shown to be similar to Brownian fractal structures. The permeability and tortuosity of each fracture profile were determined from simulations of fluid flow through these geometries with impermeable fracture walls. A surrounding permeable medium, assumed to obey Darcy's Law with permeabilities from 0.2 to 2,000 millidarcies, was then included in the analysis. A series of simulations of flow in fractured permeable rocks was performed, and the results were used to develop a relationship between the flow rate and the pressure loss for fractures in porous rocks. The resulting friction factor, which accounts for the fracture's geometric properties, is similar to the cubic law; it has the potential to be of use in discrete-fracture reservoir-scale simulations of fluid flow through highly fractured geologic formations with appreciable matrix permeability. The observed fluid flow from the surrounding permeable medium to the fracture was significant when the resistances within the fracture and the medium were of the same order. An increase of more than 5% in the volumetric flow rate within the fracture profile was observed for flows within high-permeability fractured porous media.
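
      For reference, the cubic law mentioned above treats the fracture as a smooth parallel-plate channel, so the flow rate scales with the cube of the aperture: Q = w h³ ΔP / (12 μ L). A short sketch of that arithmetic (the parameter values are illustrative):

          def cubic_law_flow(aperture, width, dp, length, mu=1.0e-3):
              """Volumetric flow (m^3/s) through a smooth parallel-plate fracture
              per the cubic law: Q = w * h**3 * dP / (12 * mu * L). Real fractures
              deviate from this because of roughness, which is what the derived
              friction factor corrects for."""
              return width * aperture ** 3 * dp / (12.0 * mu * length)

          # 0.5 mm aperture, 0.1 m width, 10 kPa drop over 1 m of water-filled fracture.
          print(cubic_law_flow(aperture=5e-4, width=0.1, dp=1e4, length=1.0))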

    4. DEVELOPMENT OF A COMPUTATIONAL MULTIPHASE FLOW MODEL FOR FISCHER TROPSCH SYNTHESIS IN A SLURRY BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Donna Post Guillen; Tami Grimmett; Anastasia M. Gribik; Steven P. Antal

      2010-09-01

      The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems, with the principal objective of safeguarding U.S. energy security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR), in which gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer chain hydrocarbon products, which can be upgraded to gasoline, diesel, or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics, coupled with reacting flow chemistry and heat transfer, that affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamic (CMFD) model to aid in understanding the physico-chemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles, and solid catalyst particles) that includes twelve species: the (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product, each tracked in the small bubbles, the large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined based on the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1]. The model includes heat generation due to the exothermic chemical reaction, as well as heat removal from a constant temperature heat exchanger. Results of the CMFD simulations (similar to those shown in Figure 1) will be presented.
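
      For context, the Yates-Satterfield macrokinetic expression referenced above has the Langmuir-Hinshelwood form -r_CO = a*P_CO*P_H2 / (1 + b*P_CO)^2; a minimal sketch follows, in which the constants a and b are placeholders rather than the values used in the reactor model.

```python
def ft_rate_yates_satterfield(p_co, p_h2, a=1.0e-2, b=2.0):
    """CO consumption rate of the form a*P_CO*P_H2 / (1 + b*P_CO)^2.
    a and b are illustrative placeholders; real values are fit to
    cobalt-catalyst data at the operating temperature."""
    return a * p_co * p_h2 / (1.0 + b * p_co) ** 2

# Pressures in arbitrary but consistent units.
print(ft_rate_yates_satterfield(p_co=0.5, p_h2=1.0))
```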

    5. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9–10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community, and to collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop was comprised of plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways from the workshop and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts, supply discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest what the most pressing MHK technology needs are and how the U.S. Department of Energy (DOE) and national laboratory resources can be utilized to assist the marine energy industry in the most effective manner.

    6. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9-10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community and collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop was comprised of plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts and discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest how the U.S. Department of Energy and national laboratory resources can be utilized to most effectively assist the marine energy industry.

    7. Transfer matrix computation of critical polynomials for two-dimensional Potts models

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jacobsen, Jesper Lykke; Scullard, Christian R.

      2013-02-04

      In our previous work, we showed that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P{sub B}(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e{sup K} − 1 of P{sub B}(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P{sub B}(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P{sub B}(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8{sup 2}), kagome, and (3, 12{sup 2}) lattices for bases of up to 96, 162, and 243 edges, respectively, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v{sub c} obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v{sub c}(4, 8{sup 2}) = 3.742 489 (4), v{sub c}(kagome) = 1.876 459 7 (2), and v{sub c}(3, 12{sup 2}) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.
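
      Since v = e{sup K} − 1, each critical root v{sub c} quoted above converts directly to a critical coupling K{sub c} = ln(1 + v{sub c}); the short sketch below applies this to the three q = 3 values from the abstract (function and label names are ours).

```python
import math

def coupling_from_vc(v_c):
    """Critical coupling K_c = J/(k_B T_c) from the root v_c = e^K - 1."""
    return math.log(1.0 + v_c)

for lattice, v_c in [("(4,8^2)", 3.742489),
                     ("kagome", 1.8764597),
                     ("(3,12^2)", 5.03307849)]:
    print(lattice, coupling_from_vc(v_c))
```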

    8. Computer modeling of electrical and thermal performance during bipolar pulsed radiofrequency for pain relief

      SciTech Connect (OSTI)

      Pérez, Juan J.; Pérez-Cajaraville, Juan J.; Muñoz, Víctor; Berjano, Enrique

      2014-07-15

      Purpose: Pulsed RF (PRF) is a nonablative technique for treating neuropathic pain. Bipolar PRF application is currently aimed at creating a “strip lesion” to connect the electrode tips; however, the electrical and thermal performance during bipolar PRF is currently unknown. The objective of this paper was to study the temperature and electric field distributions during bipolar PRF. Methods: The authors developed computer models to study temperature and electric field distributions during bipolar PRF and to assess the possible ablative thermal effect caused by the accumulated temperature spikes, along with any possible electroporation effects caused by the electric field. The authors also modeled the bipolar ablative mode, known as bipolar Continuous Radiofrequency (CRF), in order to compare the two techniques. Results: There were important differences between CRF and PRF in terms of electrical and thermal performance. In bipolar CRF: (1) the initial temperature of the tissue affects the temperature evolution and hence the thermal lesion dimensions; and (2) at 37 °C, 6 min of bipolar CRF creates a strip thermal lesion between the electrodes when these are separated by a distance of up to 20 mm. In bipolar PRF: (1) an interelectrode distance shorter than 5 mm produces thermal damage (i.e., an ablative effect) in the intervening tissue after 6 min of bipolar RF; and (2) the possible electroporation effect (electric fields higher than 150 kV m{sup −1}) would be exclusively circumscribed to a very small zone of tissue around the electrode tips. Conclusions: The results suggest that (1) the clinical parameters considered suitable for bipolar CRF should not necessarily be considered valid for bipolar PRF, and vice versa; and (2) the ablative effect of the CRF mode is mainly due to its much greater level of delivered energy than in PRF, so at the same applied energy levels CRF and PRF are expected to produce thermal damage zones of the same dimensions.
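
      Ablative thermal damage of the kind assessed above is commonly scored with an Arrhenius damage integral, Omega(t) = integral of A*exp(-Ea/(R*T)) dt; a sketch of that accumulation follows, where the kinetic constants A and Ea are generic tissue values for illustration, not parameters from this paper.

```python
import numpy as np

def arrhenius_damage(temps_K, dt_s, A=7.39e39, Ea=2.577e5):
    """Accumulated thermal damage Omega = sum(A * exp(-Ea/(R*T)) * dt).
    A (1/s) and Ea (J/mol) are illustrative tissue constants."""
    R = 8.314  # J/(mol K)
    return float(np.sum(A * np.exp(-Ea / (R * np.asarray(temps_K))) * dt_s))

# 6 min at a constant 50 degC (323.15 K), sampled once per second.
print(arrhenius_damage(np.full(360, 323.15), dt_s=1.0))
```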

    9. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

      SciTech Connect (OSTI)

      Jablonowski, Christiane

      2015-07-14

      The research investigates and advances strategies for bridging the scale discrepancies between local, regional, and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing, and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes were chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
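
      A gradient-based refinement flag is the simplest concrete instance of the refinement criteria described above; here is a one-dimensional sketch (the field and threshold are illustrative, and production AMR libraries such as Chombo use much richer error estimators).

```python
import numpy as np

def flag_for_refinement(field, dx, threshold):
    """Flag 1-D cells whose solution gradient magnitude exceeds a
    threshold; cells marked True would be refined to the next level."""
    return np.abs(np.gradient(field, dx)) > threshold

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)  # a sharp front mimicking a storm-scale feature
print(np.where(flag_for_refinement(u, x[1] - x[0], threshold=5.0))[0])
```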

    10. Determining Columbia and Snake River Project Tailrace and Forebay Zones of Hydraulic Influence using MASS2 Modeling

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.; Perkins, William A.

      2010-12-01

      Although fisheries biology studies are frequently performed at US Army Corps of Engineers (USACE) projects along the Columbia and Snake Rivers, there is currently no consistent definition of the ``forebay'' and ``tailrace'' regions for these studies. At this time, each study may use somewhat arbitrary lines (e.g., the Boat Restriction Zone) to define the upstream and downstream limits of the study, which may be significantly different at each project. Fisheries researchers are interested in establishing a consistent definition of project forebay and tailrace regions for the hydroelectric projects on the lower Columbia and Snake rivers. The Hydraulic Extent of a project was defined by USACE (Brad Eppard, USACE-CENWP) as follows: the river reach directly upstream (forebay) and downstream (tailrace) of a project that is influenced by the normal range of dam operations. Outside this reach, for a particular river discharge, changes in dam operations cannot be detected by hydraulic measurement. The purpose of this study was, in consultation with USACE and regional representatives, to develop and apply a consistent set of criteria for determining the hydraulic extent of each of the projects in the lower Columbia and Snake rivers. A 2D depth-averaged river model, MASS2, was applied to the Snake and Columbia Rivers. New computational meshes were developed for most reaches and the underlying bathymetric data were updated with the most current survey data. The computational meshes resolved each spillway bay and turbine unit at each project and extended from project to project. MASS2 was run for a range of total river flows, and each flow for a range of project operations at each project. The modeled flow was analyzed to determine the range of velocity magnitude differences and the range of flow direction differences at each location in the computational mesh for each total river flow. Maps of the differences in flow direction and velocity magnitude were created. USACE fishery biologists requested data analysis to determine the project hydraulic extent based on the following criteria: 1) for areas where the mean velocity is less than 4 ft/s, the water velocity differences between operations are not greater than 0.5 ft/s and/or the differences in water flow direction are not greater than 10 degrees; 2) if the mean water velocity is 4.0 ft/s or greater, the boundary is determined using the differences in water flow direction alone (i.e., not greater than 10 degrees). Based on these criteria, and excluding areas with a mean velocity of less than 0.1 ft/s (within the error of the model), a final set of graphics was developed that included data from all flows and all operations. Although each hydroelectric project has a different physical setting, there were some common results. The downstream hydraulic extent tended to be greater than the hydraulic extent in the forebay. The hydraulic extent of the projects tended to be larger at the mid-range flows. At higher flows, the channel geometry tends to reduce the impact of project operations.
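
      A pointwise sketch of the two criteria above, as they might be applied at each node of the model mesh (arrays in ft/s and degrees; the function and names are ours):

```python
import numpy as np

def within_hydraulic_extent(u_mean, du, dtheta_deg):
    """Apply the study's criteria: below 4 ft/s, a point is inside the
    hydraulic extent if operations change speed by > 0.5 ft/s and/or
    direction by > 10 degrees; at 4 ft/s and above, only the direction
    criterion applies. Points below 0.1 ft/s are excluded as model noise."""
    u_mean, du, dtheta_deg = map(np.asarray, (u_mean, du, dtheta_deg))
    low = (u_mean >= 0.1) & (u_mean < 4.0) & ((du > 0.5) | (dtheta_deg > 10.0))
    high = (u_mean >= 4.0) & (dtheta_deg > 10.0)
    return low | high

print(within_hydraulic_extent([0.05, 2.0, 5.0], [0.6, 0.6, 2.0], [2.0, 2.0, 5.0]))
```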

    11. Computation Results from a Parametric Study to Determine Bounding Critical Systems of Homogeneously Water-Moderated Mixed Plutonium--Uranium Oxides

      SciTech Connect (OSTI)

      Shimizu, Y.

      2001-01-11

      This report provides computational results of an extensive study to examine the following: (1) infinite media neutron-multiplication factors; (2) material bucklings; (3) bounding infinite media critical concentrations; (4) bounding finite critical dimensions of water-reflected and homogeneously water-moderated one-dimensional systems (i.e., spheres, cylinders of infinite length, and slabs that are infinite in two dimensions) comprised of various proportions and densities of plutonium oxides and uranium oxides, each having various isotopic compositions; and (5) sensitivity coefficients of k-eff with respect to the critical dimensions, determined for each of the three geometries studied. The study was undertaken to support the development of a standard sponsored by the International Organization for Standardization (ISO) under Technical Committee 85, Nuclear Energy (TC 85)--Subcommittee 5, Nuclear Fuel Technology (SC 5)--Working Group 8, Standardization of Calculations, Procedures and Practices Related to Criticality Safety (WG 8). The designation and title of the ISO TC 85/SC 5/WG 8 standard working draft is WD 14941, ''Nuclear energy--Fissile materials--Nuclear criticality control and safety of plutonium-uranium oxide fuel mixtures outside of reactors.'' Various ISO member participants performed similar computational studies using their indigenous computational codes to provide comparative results for analysis in the development of the standard.
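
      As a reminder of the geometry side of such parametric studies, the bare critical sphere radius follows from equating geometric and material buckling, B_g^2 = (pi/(R + d))^2 = B_m^2; a sketch with an illustrative buckling value (not one from the report):

```python
import math

def critical_sphere_radius(material_buckling_cm2, extrapolation_cm=0.0):
    """Critical radius (cm) of a bare sphere from
    B_g^2 = (pi / (R + d))^2 = B_m^2, where d is the extrapolation length."""
    return math.pi / math.sqrt(material_buckling_cm2) - extrapolation_cm

print(critical_sphere_radius(2.5e-3))  # illustrative B_m^2 of 2.5e-3 cm^-2
```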

    12. International Nuclear Energy Research Initiative Development of Computational Models for Pyrochemical Electrorefiners of Nuclear Waste Transmutation Systems

      SciTech Connect (OSTI)

      M.F. Simpson; K.-R. Kim

      2010-12-01

      In support of closing the nuclear fuel cycle using non-aqueous separations technology, this project aims to develop computational models of electrorefiners based on fundamental chemical and physical processes. Spent driver fuel from Experimental Breeder Reactor-II (EBR-II) is currently being electrorefined in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory (INL), and the Korea Atomic Energy Research Institute (KAERI) is developing electrorefining technology for future application to spent fuel treatment and management in the Republic of Korea (ROK). Electrorefining is a critical component of pyroprocessing, a non-aqueous chemical process which separates spent fuel into four streams: (1) uranium metal, (2) U/TRU metal, (3) metallic high-level waste containing cladding hulls and noble metal fission products, and (4) ceramic high-level waste containing sodium and active metal fission products. Having rigorous yet flexible electrorefiner models will facilitate process optimization and assist in trouble-shooting as necessary. To attain such models, INL/UI has focused on approaches to develop a computationally light and portable two-dimensional (2D) model, while KAERI/SNU has investigated approaches to develop a computationally intensive three-dimensional (3D) model for detailed and fine-tuned simulation.

    13. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-01-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines (HATTs). First, an HATT blade was designed using the blade element momentum method in conjunction with a genetic optimization algorithm. Several unstructured computational grids were generated using this blade geometry, and steady CFD simulations were used to perform a grid resolution study. Transient simulations were then performed to determine the effect of time-dependent flow phenomena and the size of the computational timestep on the numerical solution. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.
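
      The grid resolution study described above is the standard setting for Richardson extrapolation; the sketch below estimates a grid-independent blade force from two grid levels (the refinement ratio, observed order, and force values are assumptions for illustration, not figures from the paper).

```python
def richardson_extrapolate(f_fine, f_coarse, r=2.0, p=2.0):
    """Grid-independent estimate from solutions on two grids, assuming a
    refinement ratio r and an observed order of accuracy p."""
    return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

# Illustrative thrust values (N) from a coarse and a fine grid.
print(richardson_extrapolate(f_fine=1520.0, f_coarse=1460.0))
```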

    14. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASC Advanced Simulation and Computing: The Computational Systems & Software Environment program builds integrated,...

    15. Overview of Computer-Aided Engineering of Batteries and Introduction to Multi-Scale, Multi-Dimensional Modeling of Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

      2012-05-01

      This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

    16. Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud...

    17. Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse. P.I.: Ahmad Pesaran. Team: Tomasz Wierzbicki and Elham Sahraei (MIT); Genong Li and Lewis Collins (ANSYS); M. Sprague, G.H. Kim and S. Santhangopalan (NREL). June 17, 2014. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Project ID: ES199. NREL/PR-5400-61885. Overview: Project Start: October 2013...

    18. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      McClanahan, Richard; De Leon, Phillip L.

      2014-08-20

      The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
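
      The primitive operation in Runnalls-style mixture reduction is a moment-preserving merge of two weighted Gaussian components; a sketch follows (array shapes and names are ours, and a full reduction would also rank candidate pairs by a KL-based merge cost).

```python
import numpy as np

def merge_gaussians(w1, m1, S1, w2, m2, S2):
    """Moment-preserving merge of two weighted Gaussian components
    (weight, mean, covariance) into one, as used when collapsing a
    GMM-UBM into a smaller tree node."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = (m1 - m)[:, None], (m2 - m)[:, None]
    S = (w1 * (S1 + d1 @ d1.T) + w2 * (S2 + d2 @ d2.T)) / w
    return w, m, S

w, m, S = merge_gaussians(0.6, np.array([0.0, 0.0]), np.eye(2),
                          0.4, np.array([1.0, 1.0]), np.eye(2))
print(w, m, S)
```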

    19. BLENDING STUDY FOR SRR SALT DISPOSITION INTEGRATION: TANK 50H SCALE-MODELING AND COMPUTER-MODELING FOR BLENDING PUMP DESIGN, PHASE 2

      SciTech Connect (OSTI)

      Leishear, R.; Poirier, M.; Fowley, M.

      2011-05-26

      The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks, where 300,000-800,000 gallons of salt solution will be blended in 1.3 million gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective and from a process safety perspective, where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94 inch diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, was used in this research. The test tank represents a 1/10.85 scaled version of an 85 foot diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled, per the SRR Project Engineering Manager. SRNL blending tests investigated various fixed position, non-rotating, dual nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). Primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks. Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where the sludge level was shown to remain constant. To experimentally model the sludge layer, a very thin, pourable sludge simulant was conservatively used for all testing. To experimentally model the liquid supernate layer above the sludge in waste tanks, two salt solution simulants were used, which provided a bounding range of supernate properties: one solution was water (H{sub 2}O + NaOH), and the other was an inhibited, more viscous salt solution. The research performed and data obtained significantly advance the understanding of fluid mechanics, mixing theory, and CFD modeling for nuclear waste tanks by benchmarking CFD results against actual experimental data. This research significantly bridges the gap between previous CFD models and actual field experience in real waste tanks. A finding of the 2009 DOE Slurry Retrieval, Pipeline Transport and Plugging, and Mixing Workshop was that CFD models were inadequate to assess blending processes in nuclear waste tanks, and one recommendation from that Workshop was that a validation, or benchmarking, program be performed for CFD modeling versus experiment. This research provided experimental data to validate and correct CFD models as they apply to mixing and blending in nuclear waste tanks, and the extensive SDI research was a significant step toward benchmarking and applying CFD modeling. This research showed that CFD models not only agreed with experiment, but also demonstrated that the large variance in actual experimental data accounts for previously misunderstood discrepancies between CFD models and experiments. Having documented this finding, SRNL was able to provide correction factors to be used with CFD models to statistically bound full scale CFD results. Through the use of pilot scale tests performed for both types of pumps and available engineering literature, SRNL demonstrated how to effectively apply CFD results to salt batch mixing in full scale waste tanks. In other words, CFD models were in error prior to the development of the experimental correction factors determined during this research, which provided a technique to use CFD models for salt batch mixing and transfer pump operations. This major scientific advance in mixing technology resulted in multi-million dollar cost savings to SRR. New techniques were developed for both experiment and analysis to complete this research. Supporting this success, research findings are summarized in the Conclusions section of this report, and technical recommendations for design and operation are also included there.

    20. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    1. Experimental determination of lead carbonate solubility at high ionic strengths: A Pitzer model description

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Xiong, Yongliang

      2015-05-06

      In this article, solubility measurements of lead carbonate, PbCO{sub 3}(cr), cerussite, as a function of total ionic strength are conducted in mixtures of NaCl and NaHCO{sub 3} up to I = 1.2 mol·kg{sup −1} and in mixtures of NaHCO{sub 3} and Na{sub 2}CO{sub 3} up to I = 5.2 mol·kg{sup −1}, at room temperature (22.5 ± 0.5 °C). The solubility constant (log K{sub sp}) for cerussite, PbCO{sub 3}(cr) = Pb{sup 2+} + CO{sub 3}{sup 2−}, was determined as −13.76 ± 0.15 (2σ), together with a set of Pitzer parameters describing the specific interactions of PbCO{sub 3}(aq), Pb(CO{sub 3}){sub 2}{sup 2−}, and Pb(CO{sub 3})Cl{sup −} with the bulk supporting electrolytes, based on the Pitzer model. The model developed in this work can reproduce the experimental results, including model-independent solubility values from the literature, over a wide range of ionic strengths with satisfactory accuracy. The model is expected to find applications in numerous fields, including the accurate description of the chemical behavior of lead in geological repositories, the modeling of the formation of oxidized Pb-Zn ore deposits, and the environmental remediation of lead contamination.
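
      With the solubility constant above, a deliberately crude back-of-envelope sketch of free Pb{sup 2+} at a given free carbonate concentration, treating molalities as activities and ignoring the carbonato complexes that the paper's Pitzer model handles explicitly:

```python
def pb_molality_from_ksp(free_carbonate_molal, log_ksp=-13.76):
    """Free Pb2+ molality from Ksp = [Pb2+][CO3^2-], treating molalities
    as activities (a simplification the Pitzer model avoids)."""
    return 10.0**log_ksp / free_carbonate_molal

print(pb_molality_from_ksp(1.0e-4))  # illustrative free carbonate of 1e-4 mol/kg
```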

    2. A non-CFD modeling system for computing 3D wind and concentration fields in urban environments

      SciTech Connect (OSTI)

      Nelson, Matthew A; Brown, Michael J; Williams, Michael D; Gowardhan, Akshay; Pardyjak, Eric R

      2010-01-01

      The Quick Urban & Industrial Complex (QUIC) Dispersion Modeling System has been developed to rapidly compute the transport and dispersion of toxic agent releases in the vicinity of buildings. It is composed of an empirical-diagnostic wind solver, an 'urbanized' Lagrangian random-walk model, and a graphical user interface. The code has been used for homeland security and environmental air pollution applications. In this paper, we discuss the wind solver methodology and improvements made to the original Roeckle schemes in order to better capture flow fields in dense built-up areas. The model-computed wind and concentration fields are then compared to measurements from several field experiments. Improvements to the QUIC Dispersion Modeling System have been made to account for the inhomogeneous and complex building layouts found in large cities. The logic that has been introduced into the code is described, and comparisons of model output to full-scale outdoor urban measurements in Oklahoma City and New York City are given. Although far from perfect, the model agreed fairly well with measurements and in many cases performed comparably to CFD codes.

    3. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Project Is the ACME of Computer Science to Address Climate Change: Sandia high-performance computing (HPC) researchers are working with DOE and 14 other national laboratories and institutions to develop and apply the most complete climate and Earth system model, to address the most challenging and...

    4. Flow field computation of the NREL S809 airfoil using various turbulence models

      SciTech Connect (OSTI)

      Chang, Y.L.; Yang, S.L.; Arici, O. [Michigan Technological Univ., Houghton, MI (United States). Mechanical Engineering-Engineering Mechanics Dept.

      1996-10-01

      Performance comparison of three popular turbulence models, namely the Baldwin-Lomax algebraic model, Chien's Low-Reynolds-Number {kappa}-{epsilon} model, and Wilcox's Low-Reynolds-Number {kappa}-{omega} model, is given. These models were applied to calculate the flow field around the National Renewable Energy Laboratory S809 airfoil using a Total Variation Diminishing scheme. Numerical results for C{sub P}, C{sub L}, and C{sub D} are presented along with the Delft experimental data. It is shown that all three models perform well for attached flow, i.e., no flow separation, at low angles of attack. However, at high angles of attack with flow separation, the convergence characteristics show that Wilcox's model outperforms the other models. Results of this study will be used to guide the authors in their dynamic stall research.
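
      The C{sub P} comparison mentioned above is the standard nondimensionalization of surface pressure; a minimal sketch (the input values are illustrative, not S809 data):

```python
def pressure_coefficient(p, p_inf, rho, v_inf):
    """C_p = (p - p_inf) / (0.5 * rho * V_inf^2), the surface quantity
    typically compared against wind tunnel measurements."""
    return (p - p_inf) / (0.5 * rho * v_inf**2)

print(pressure_coefficient(p=100900.0, p_inf=101325.0, rho=1.225, v_inf=40.0))
```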

    5. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

      SciTech Connect (OSTI)

      Simunovic, S; Zacharia, T

      1997-11-01

      This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The constitutive relations are fully derived in the framework of the continuum program DYNA3D, and the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on the strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from physical tests are documented.

    6. Experimental determination of electrical characteristics and circuit models of superconducting dipole magnets

      SciTech Connect (OSTI)

      Smedley, K.M. (Dept. of Electrical and Computer Engineering); Shafer, R.E.

      1994-09-01

      Superconducting magnets are used very extensively in modern science and technology to produce high-intensity magnetic fields. In many applications, magnets are used in non-dc conditions and are subject to current ramping. The magnets studied in this paper were intended for use in the Superconducting Super Collider, with a ramping rate of 4 A/s and a maximum current of 7000 A. Due to the effects of eddy currents and parasitic capacitance, the electrical characteristics of superconducting magnets are not purely inductive; instead, they are frequency-dependent. This paper develops a method of accurately measuring the ac characteristics and determining circuit models of superconducting magnets that characterize the eddy currents and the parasitic capacitance. This measurement method can be used to analyze eddy currents, and the resulting circuit model can be used to study the transmission-line effects of long magnet strings.
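
      A minimal lumped-element sketch of such a frequency-dependent magnet model: an inductance shunted by a parasitic capacitance and an eddy-current loss resistance. The element values are placeholders for illustration, not measured magnet parameters.

```python
import numpy as np

def magnet_impedance(f_hz, L=0.07, C=10e-9, R_eddy=500.0):
    """Complex impedance of L in parallel with C and R_eddy: purely
    inductive at low frequency, resonant where the parasitics dominate."""
    w = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
    y = 1.0 / (1j * w * L) + 1j * w * C + 1.0 / R_eddy
    return 1.0 / y

for f in (10.0, 1.0e3, 100.0e3):
    print(f, abs(magnet_impedance(f)))
```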

    7. Making a Computer Model of the Most Complex System Ever Built - Continuum

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Continuum Magazine | NREL: David Mooney, director of NREL's Strategic Energy Analysis Center (left), and Robin Newmark, NREL's associate director for Energy Analysis and Decision Support (right), examine an advanced systems analysis of the impacts of high penetrations of renewable energy on the U.S. electrical grid. Photo by Dennis

    8. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

      SciTech Connect (OSTI)

      Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

      2015-01-01

      The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

    9. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      SciTech Connect (OSTI)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

    10. Computational model, method, and system for kinetically-tailoring multi-drug chemotherapy for individuals

      DOE Patents [OSTI]

      Gardner, Shea Nicole (San Leandro, CA)

      2007-10-23

      A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment regimens based on the efficacy values.
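
      A minimal sketch of the kind of proliferating/quiescent (P/Q) cell kinetics described above, with a drug kill term acting on proliferating cells; all rate constants here are illustrative assumptions, not the patent's model.

```python
def simulate_pq(p0, q0, days, dt=0.01, growth=0.3, kill=0.4, k_pq=0.1, k_qp=0.05):
    """Explicit-Euler integration (rates per day) of
    dP/dt = (growth - kill - k_pq)*P + k_qp*Q and
    dQ/dt = k_pq*P - k_qp*Q."""
    P, Q = float(p0), float(q0)
    for _ in range(int(days / dt)):
        dP = (growth - kill - k_pq) * P + k_qp * Q
        dQ = k_pq * P - k_qp * Q
        P, Q = P + dt * dP, Q + dt * dQ
    return P, Q

print(simulate_pq(p0=1.0e9, q0=1.0e8, days=30))
```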

    11. Protein superfamily members as targets for computer modeling: The carbohydrate recognition domain of a macrophage lectin

      SciTech Connect (OSTI)

      Stenkamp, R.E.; Aruffo, A.; Bajorath, J.

      1996-12-31

      Members of protein superfamilies display similar folds, but share only limited sequence identity, often 25% or less. Thus, it is not straightforward to apply standard homology modeling methods to construct reliable three-dimensional models of such proteins. A three-dimensional model of the carbohydrate recognition domain of the rat macrophage lectin, a member of the calcium-dependent (C-type) lectin superfamily, has been generated to illustrate how information provided by comparison of X-ray structures and sequence-structure alignments can aid in comparative modeling when primary sequence similarities are low. 20 refs., 4 figs.

    12. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

      SciTech Connect (OSTI)

      Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.

      2012-07-01

      A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

    13. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    14. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

    15. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Research: Computational Fluid Dynamics. Overview of CFD: Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical...

    16. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z

      SciTech Connect (OSTI)

      Jennings, C. A.; Ampleford, D. J.; Lamppa, D. C.; Hansen, S. B.; Jones, B.; Harvey-Thompson, A. J.; Jobe, M.; Strizic, T.; Reneker, J.; Rochau, G. A.; Cuneo, M. E.

      2015-05-15

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    17. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z.

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; et al

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    18. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z.

      SciTech Connect (OSTI)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; Strizic, T.

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    19. Computational modeling of structure of metal matrix composite in centrifugal casting process

      SciTech Connect (OSTI)

      Zagorski, Roman [Department of Electrotechnology, Faculty of Materials Science and Metallurgy, Silesian University of Technology, ul. Krasinskiego 8, 40-019, Katowice (Poland)

      2007-04-07

      The structure of an alumina matrix composite reinforced with crystalline particles, obtained during a centrifugal casting process, is studied. Several parameters of the casting process that influence the structure of the composite are examined, such as pouring temperature, mould temperature, rotating speed, and casting mould size. Segregation of the crystalline particles, which depends on further factors such as the density difference between the liquid matrix and the reinforcement, thermal processes connected with solidification of the cast, and processes leading to changes in the physical and structural properties of the liquid composite, is also investigated. All simulations are carried out with the CFD program Fluent, using the FLUENT two-phase free-surface (air and matrix) unsteady flow model (volume of fluid model, VOF) and the discrete phase model (DPM).

    20. DFT modeling of adsorption onto uranium metal using large-scale parallel computing

      SciTech Connect (OSTI)

      Davis, N.; Rizwan, U.

      2013-07-01

      There is a dearth of atomistic simulations involving the surface chemistry of γ-uranium, which is of interest as the key fuel component of a breeder-burner stage in future fuel cycles. The recent availability of high-performance computing hardware and software has rendered extended quantum chemical surface simulations involving actinides feasible. With that motivation, data for bulk and surface γ-phase uranium metal are calculated with the plane-wave pseudopotential density functional theory method. Chemisorption of atomic hydrogen and oxygen on several unrelaxed low-index faces of γ-uranium is considered. The optimal adsorption sites (calculated cohesive energies) on the (100), (110), and (111) faces are found to be the one-coordinated top site (8.8 eV), the four-coordinated center site (9.9 eV), and the one-coordinated top1 site (7.9 eV), respectively, for oxygen; and the four-coordinated center site (2.7 eV), the four-coordinated center site (3.1 eV), and the three-coordinated top2 site (3.2 eV) for hydrogen. (authors)
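
      The adsorption energies quoted above follow the usual total-energy difference; here is a sketch of that bookkeeping (the inputs would come from three separate DFT runs; the example numbers are made up):

```python
def adsorption_energy_eV(e_slab_adsorbate, e_slab, e_isolated_atom):
    """E_ads = E(slab + adsorbate) - E(slab) - E(atom); a more negative
    value (larger magnitude) indicates stronger binding."""
    return e_slab_adsorbate - e_slab - e_isolated_atom

# Made-up total energies (eV) standing in for three DFT calculations.
print(adsorption_energy_eV(-1052.3, -1041.9, -0.5))
```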

    1. Computational Nanophotonics: modeling optical interactions and transport in tailored nanosystem architectures

      SciTech Connect (OSTI)

      Schatz, George; Ratner, Mark

      2014-02-27

      This report describes research by George Schatz and Mark Ratner that was done over the period 10/03-5/09 at Northwestern University. This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray, and there were many joint publications as summarized later. In addition, a lot of this work involved collaborations with experimental groups at Northwestern, Argonne, and elsewhere. The research was primarily concerned with developing theory and computational methods that can be used to describe the interaction of light with noble metal nanoparticles (especially silver) that are capable of plasmon excitation. Classical electrodynamics provides a powerful approach for performing these studies, so much of this research project involved the development of methods for solving Maxwell’s equations, including both linear and nonlinear effects, and examining a wide range of nanostructures, including particles, particle arrays, metal films, films with holes, and combinations of metal nanostructures with polymers and other dielectrics. In addition, our work broke new ground in the development of quantum mechanical methods to describe plasmonic effects based on the use of time dependent density functional theory, and we developed new theory concerned with the coupling of plasmons to electrical transport in molecular wire structures. Applications of our technology were aimed at the development of plasmonic devices as components of optoelectronic circuits, plasmons for spectroscopy applications, and plasmons for energy-related applications.

    2. COMPUTATIONAL AND EXPERIMENTAL MODELING OF THREE-PHASE SLURRY-BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Isaac K. Gamwo; Dimitri Gidaspow

      1999-09-01

      Considerable progress has been achieved in understanding three-phase reactors from the point of view of kinetic theory. In a paper in press in Chemical Engineering Science (Wu and Gidaspow, 1999), we obtained a complete numerical solution of bubble column reactors. In view of the complexity of the simulation, a better understanding of the processes using simplified analytical solutions is required. Such analytical solutions are presented in the attached paper, Large Scale Oscillations or Gravity Waves in Risers and Bubbling Beds, which gives analytical solutions for bubbling frequencies and standing wave flow patterns. The flow patterns in operating slurry bubble column reactors are not optimum: they involve upflow in the center and downflow at the walls. It may be possible to control flow patterns by proper redistribution of heat exchangers in slurry bubble column reactors. We also believe that the catalyst size in operating slurry bubble column reactors is not optimum. To obtain an optimum size, we are following up on the observation of George Cody of Exxon, who reported a maximum granular temperature (random particle kinetic energy) for a particle size of 90 microns. The attached paper, Turbulence of Particles in a CFB and Slurry Bubble Columns Using Kinetic Theory, supports George Cody's observations. However, our explanation for the existence of the maximum in granular temperature differs from that proposed by George Cody. Further computer simulations and experiments involving measurements of granular temperature are needed to obtain a sound theoretical explanation for the possible existence of an optimum catalyst size.

    3. Mathematical and computational modeling of the diffraction problems by discrete singularities method

      SciTech Connect (OSTI)

      Nesvit, K. V.

      2014-11-12

      The main objective of this study is to reduce the boundary-value problems of wave scattering and diffraction on plane-parallel structures to singular or hypersingular integral equations. For these cases we use the method of parametric representations of integral and pseudo-differential operators. Numerical results for model scattering problems on periodic and boundary gratings, and also on gratings above a flat screen reflector, are presented in this paper.

    4. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    5. Computational Intelligence Based Data Fusion Algorithm for Dynamic sEMG and Skeletal Muscle Force Modelling

      SciTech Connect (OSTI)

      Chandrasekhar Potluri; Madhavi Anugolu; Marco P. Schoen; D. Subbaram Naidu

      2013-08-01

      In this work, an array of three surface electromyography (sEMG) sensors is used to acquire muscle extension and contraction signals from 18 healthy test subjects. The skeletal muscle force is estimated using the acquired sEMG signals and a nonlinear Wiener-Hammerstein model, relating the two signals in a dynamic fashion. The model is obtained using a System Identification (SI) algorithm. The force models obtained for each sensor are fused using a proposed fuzzy logic concept with the intent of improving force estimation accuracy and resilience to sensor failure or misalignment. For the fuzzy logic inference system, the sEMG entropy, the relative error, and the correlation of the force signals are considered for defining the membership functions. The proposed fusion algorithm yields an average of 92.49% correlation between the actual force and the overall estimated force output. In addition, the proposed fusion-based approach is implemented on a test platform. Experiments indicate an improvement in finger/hand force estimation.
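
      As a rough illustration of the fusion step (the weighting rule below is our assumption, not the authors' fuzzy inference system), the per-sensor force estimates can be combined with weights that grow with force-signal correlation and shrink with sEMG entropy:

        import numpy as np

        # Hypothetical fusion rule: weight each sensor's force model by its
        # correlation cue and (one minus) its entropy cue, then average.
        def fuse_force(estimates, correlations, entropies):
            """estimates: (n_sensors, n_samples); cues: per-sensor scalars in [0, 1]."""
            w = np.asarray(correlations) * (1.0 - np.asarray(entropies))
            w = w / w.sum()
            return w @ np.asarray(estimates)

        est = np.random.rand(3, 1000)    # three sEMG-derived force estimates (stand-ins)
        fused = fuse_force(est, [0.9, 0.8, 0.6], [0.2, 0.3, 0.5])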

    6. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

      SciTech Connect (OSTI)

      Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

      2009-11-30

      Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.
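
      For the field-computation step, a minimal free-space sketch (ours; the geometry, frequency, and currents are invented placeholders, and a real overhead line would also need the ground-effect terms the paper's method captures through the EMTP models) superposes short-dipole far-field contributions from a known, e.g. EMTP-derived, current distribution:

        import numpy as np

        # |E_theta| from collinear short segments along z carrying complex currents.
        eta0, c = 376.73, 3.0e8   # free-space impedance (ohm), speed of light (m/s)

        def far_field_magnitude(currents, z, dl, f, r, theta):
            beta = 2.0 * np.pi * f / c
            array_factor = np.exp(1j * beta * np.asarray(z) * np.cos(theta))
            e = (1j * eta0 * beta * dl * np.sin(theta) / (4.0 * np.pi * r)
                 * np.exp(-1j * beta * r) * np.sum(np.asarray(currents) * array_factor))
            return abs(e)

        z = np.arange(50) * 10.0                 # fifty 10 m segments
        I = np.exp(-1j * 0.05 * np.arange(50))   # stand-in current distribution (A)
        print(far_field_magnitude(I, z, dl=10.0, f=30e6, r=1000.0, theta=np.pi / 2))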

    7. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extensive combinatorial results and ongoing basic...

    8. A Computational Model of the Mark-IV Electrorefiner: Phase I -- Fuel Basket/Salt Interface

      SciTech Connect (OSTI)

      Robert Hoover; Supathorn Phongikaroon; Shelly Li; Michael Simpson; Tae-Sic Yoo

      2009-09-01

      Spent driver fuel from the Experimental Breeder Reactor-II (EBR-II) is currently being treated in the Mk-IV electrorefiner (ER) in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory. The modeling approach presented here has been developed to help understand the effect of different parameters on the dynamics of this system. The first phase of this new modeling approach focuses on the fuel basket/salt interface, involving the transport of various species found in the driver fuels (e.g., uranium and zirconium). The approach reduces the number of guessed parameters to one, the exchange current density (i0). U3+ and Zr4+ were the only species used in the current study. The results reveal that most of the total cell current is used for the oxidation of uranium, with little being used by zirconium. The dimensionless approach shows that the total potential is a strong function of i0 and a weak function of the wt% of uranium in the salt system for initiation processes.
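
      For context, i0 plays the role it has in a standard Butler-Volmer rate law; a minimal sketch, assuming symmetric transfer coefficients and a molten-salt temperature (the ER model's actual electrochemical closure may differ):

        import numpy as np

        F = 96485.0   # Faraday constant, C/mol
        R = 8.314     # gas constant, J/(mol K)

        def butler_volmer(eta, i0, T=773.0, alpha_a=0.5, alpha_c=0.5):
            """Current density (A/m^2) at overpotential eta (V); i0 is the one fitted parameter."""
            return i0 * (np.exp(alpha_a * F * eta / (R * T))
                         - np.exp(-alpha_c * F * eta / (R * T)))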

    9. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social Internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for...

    10. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; et al

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    11. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; Rye, Inga H.; Børresen-Dale, Anne-Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    12. Computational Study of Bond Dissociation Enthalpies for Substituted β-O-4 Lignin Model Compounds

      SciTech Connect (OSTI)

      Younker, Jarod M; Beste, Ariana; Buchanan III, A C

      2011-01-01

      The biopolymer lignin is a potential source of valuable chemicals. Phenethyl phenyl ether (PPE) is representative of the dominant β-O-4 ether linkage. Density functional theory (DFT) is used to calculate the Boltzmann-weighted carbon-oxygen and carbon-carbon bond dissociation enthalpies (BDEs) of substituted PPE. These values are important in order to understand lignin decomposition. Exclusion of all conformers that have distributions of less than 5% at 298 K impacts the BDE by less than 1 kcal mol{sup -1}. We find that aliphatic hydroxyl/methylhydroxyl substituents introduce only small changes to the BDEs (0-3 kcal mol{sup -1}). Substitution on the phenyl ring at the ortho position substantially lowers the C-O BDE, except in combination with the hydroxyl/methylhydroxyl substituents, where the effect of methoxy substitution is reduced by hydrogen bonding. Hydrogen bonding between the aliphatic substituents and the ether oxygen in the PPE derivatives has a significant influence on the BDE. CCSD(T)-calculated BDEs and hydrogen bond strengths of ortho-substituted anisoles when compared with M06-2X values confirm that the latter method is sufficient to describe the molecules studied and provide an important benchmark for lignin model compounds.
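
      The Boltzmann weighting described above is easy to reproduce; a minimal sketch with invented conformer data, including the 5% population cutoff that the abstract reports changes the BDE by less than 1 kcal/mol:

        import numpy as np

        R = 0.0019872  # gas constant, kcal/(mol K)

        def weighted_bde(conf_energies, conf_bdes, T=298.0, cutoff=0.05):
            """Boltzmann-weighted BDE over conformers; energies in kcal/mol."""
            e = np.asarray(conf_energies) - np.min(conf_energies)
            w = np.exp(-e / (R * T))
            w /= w.sum()
            keep = w >= cutoff                  # drop sparsely populated conformers
            w = w[keep] / w[keep].sum()
            return float(w @ np.asarray(conf_bdes)[keep])

        print(weighted_bde([0.0, 0.4, 1.1], [68.2, 67.9, 69.0]))  # invented values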

    13. Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T.; Schlicher, Bob G

      2015-01-01

      There are many influencing economic factors to weigh from the defender-practitioner stakeholder point of view that involve cost combined with development/deployment models. Some examples include the cost of the countermeasures themselves, the cost of training, and the cost of maintenance. Meanwhile, we must better anticipate the total cost of a compromise. The return on investment in countermeasures is essentially the avoided impact cost (i.e., the costs from violating availability, integrity, and confidentiality/privacy requirements). The natural question arises of how to choose the main risks that must be mitigated/controlled and monitored when deciding where to focus security investments. To answer this question, we have investigated the costs/benefits to the attacker and defender to better estimate risk exposure. In doing so, it is important to develop a sound basis for estimating the factors that determine risk exposure, such as the likelihood that a threat will emerge and whether it will be thwarted. This impact assessment framework can provide key information for ranking cybersecurity threats and managing risk.
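
      A toy numeric illustration of the ranking idea (all threat names and numbers invented): exposure is the likelihood that a threat emerges and is not thwarted, times the impact cost of the resulting compromise.

        # Hypothetical ICS threat table: (P(emerge), P(not thwarted), impact cost in $).
        threats = {
            "hmi_compromise":    (0.30, 0.40, 2_000_000),
            "plc_logic_change":  (0.10, 0.60, 8_000_000),
            "data_exfiltration": (0.50, 0.20, 1_000_000),
        }
        exposure = {k: p_e * p_nt * cost for k, (p_e, p_nt, cost) in threats.items()}
        for name, x in sorted(exposure.items(), key=lambda kv: -kv[1]):
            print(f"{name}: expected loss ${x:,.0f}")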

    14. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

      SciTech Connect (OSTI)

      Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

      2012-01-01

      Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

    15. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    16. Electromagnetic model for near-field microwave microscope with atomic resolution: Determination of tunnel junction impedance

      SciTech Connect (OSTI)

      Reznik, Alexander N.

      2014-08-25

      An electrodynamic model is proposed for the tunneling microwave microscope with subnanometer space resolution as developed by Lee et al. [Appl. Phys. Lett. 97, 183111 (2010)]. The tip-sample impedance Z{sub a} was introduced and studied in the tunneling and non-tunneling regimes. At tunneling breakdown, the microwave current between probe and sample flows along two parallel channels characterized by impedances Z{sub p} and Z{sub t} that combine to form the overall impedance Z{sub a}. The quantity Z{sub p} is the capacitive impedance determined by the near field of the probe, and Z{sub t} is the impedance of the tunnel junction. By taking into account the distance dependences of the effective tip radius r{sub 0}(z) and the tunnel resistance R{sub t}(z) = Re[Z{sub t}(z)], we were able to explain the experimentally observed dependences of the resonance frequency f{sub r}(z) and quality factor Q{sub L}(z) of the microscope. The obtained microwave resistance R{sub t}(z) and direct-current tunnel resistance R{sub t}{sup dc}(z) exhibit qualitatively similar behavior, although they differ greatly in both magnitude and the characteristic scale of the height dependence. Interpretation of the microwave images of the atomic structure of test samples proved possible by taking into account the inductive component of the tunnel impedance, ImZ{sub t} = ωL{sub t}. The relation ωL{sub t}/R{sub t} ≈ 0.235 was obtained.
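
      The two-channel picture can be sketched directly (all values invented; L{sub t} below is chosen only to reproduce the reported ratio ωL{sub t}/R{sub t} ≈ 0.235):

        import numpy as np

        def tip_sample_impedance(omega, C_p, R_t, L_t):
            """Capacitive near-field channel in parallel with the tunnel junction."""
            Z_p = 1.0 / (1j * omega * C_p)    # probe near-field (capacitive) channel
            Z_t = R_t + 1j * omega * L_t      # tunnel junction channel
            return Z_p * Z_t / (Z_p + Z_t)    # parallel combination

        omega = 2.0 * np.pi * 7.5e9           # assumed microwave frequency (rad/s)
        Za = tip_sample_impedance(omega, C_p=1e-18, R_t=1e8, L_t=0.235 * 1e8 / omega)
        print(Za)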

    17. Magnetic resonance imaging and computational fluid dynamics (CFD) simulations of rabbit nasal airflows for the development of hybrid CFD/PBPK models

      SciTech Connect (OSTI)

      Corley, Richard A.; Minard, Kevin R.; Kabilan, Senthil; Einstein, Daniel R.; Kuprat, Andrew P.; harkema, J. R.; Kimbell, Julia; Gargas, M. L.; Kinzell, John H.

      2009-06-01

      The percentages of total airflows over the nasal respiratory and olfactory epithelium of female rabbits were calculated from computational fluid dynamics (CFD) simulations of steady-state inhalation. These airflow calculations, along with nasal airway geometry determinations, are critical parameters for hybrid CFD/physiologically based pharmacokinetic models that describe the nasal dosimetry of water-soluble or reactive gases and vapors in rabbits. CFD simulations were based upon three-dimensional computational meshes derived from magnetic resonance images of three adult female New Zealand White (NZW) rabbits. In the anterior portion of the nose, the maxillary turbinates of rabbits are considerably more complex than comparable regions in rats, mice, monkeys, or humans. This leads to a greater surface area to volume ratio in this region and thus the potential for increased extraction of water-soluble or reactive gases and vapors in the anterior portion of the nose compared to many other species. Although there was considerable interanimal variability in the fine structures of the nasal turbinates and airflows in the anterior portions of the nose, there was remarkable consistency between rabbits in the percentage of total inspired airflows that reached the ethmoid turbinate region (~50%) that is presumably lined with olfactory epithelium. These latter results (airflows reaching the ethmoid turbinate region) were higher than previously published estimates for the male F344 rat (19%) and human (7%). These differences in regional airflows can have significant implications in interspecies extrapolations of nasal dosimetry.

    18. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect (OSTI)

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

    19. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly. Final CRADA Report.

      SciTech Connect (OSTI)

      Tentner, A.; Nuclear Engineering Division

      2009-10-13

      A direct numerical simulation capability for two-phase flows with heat transfer in complex geometries can considerably reduce the hardware development cycle, facilitate optimization, and reduce the costs of testing various industrial facilities, such as nuclear power plants, steam generators, steam condensers, liquid cooling systems, heat exchangers, distillers, and boilers. Specifically, the phenomena occurring in a two-phase coolant flow in a BWR (Boiling Water Reactor) fuel assembly include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with the fuel assembly and, ultimately, the reactor performance. Traditionally, the best tools for analyzing two-phase flow phenomena inside the BWR fuel assembly have been sub-channel codes. However, the resolution of these codes is too coarse for analyzing detailed intra-assembly flow patterns, such as flow around a spacer element. Advanced CFD (Computational Fluid Dynamics) codes offer the potential for detailed 3D simulations of coolant flow inside a fuel assembly, including flow around a spacer element, using more fundamental physical models of flow regimes and phase interactions than sub-channel codes. Such models can extend the code applicability to a wider range of situations, which is highly important for increasing efficiency and preventing accidents.

    20. Computational fluid dynamics modeling of chemical looping combustion process with calcium sulphate oxygen carrier - article no. A19

      SciTech Connect (OSTI)

      Baosheng Jin; Rui Xiao; Zhongyi Deng; Qilei Song

      2009-07-01

      Concentrating CO{sub 2} from combustion processes in efficient and energy-saving ways is a first and very important step toward its sequestration. Chemical looping combustion (CLC) can easily achieve this goal. A chemical-looping combustion system consists of two interconnected fluidized-bed reactors: (1) a fuel reactor, where the oxygen carrier is reduced by reaction with the fuel, and (2) an air reactor, where the reduced oxygen carrier from the fuel reactor is oxidized with air. The outlet gas from the fuel reactor consists of CO{sub 2} and H{sub 2}O, while the outlet gas stream from the air reactor contains only N{sub 2} and some unused O{sub 2}. The water in the combustion products can be easily removed by condensation, and pure carbon dioxide is obtained without any loss of energy for separation. Until now, there has been little literature on the mathematical modeling of chemical-looping combustion using the computational fluid dynamics (CFD) approach. In this work, a reaction kinetic model of the fuel reactor (CaSO{sub 4} + H{sub 2}) is developed by means of the commercial code FLUENT, and the effects of the partial pressure of H{sub 2} (concentration of H{sub 2}) on chemical looping combustion performance are also studied. The results show that the concentration of H{sub 2} can enhance the CLC performance.
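
      For orientation, the calcium sulphate looping cycle is commonly written with the following overall reactions (standard stoichiometry, not quoted from the paper):

        Fuel reactor: CaSO{sub 4} + 4H{sub 2} → CaS + 4H{sub 2}O
        Air reactor:  CaS + 2O{sub 2} → CaSO{sub 4}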

    1. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Widespread Hydrogen Fueling Infrastructure Is the Goal of H2FIRST Project Capabilities, Center for Infrastructure Research and Innovation (CIRI), Computational Modeling & Simulation, Energy, Energy Storage, Energy Storage Systems, Facilities, Infrastructure Security, Materials Science, Modeling, Modeling & Analysis, News, News & Events, Partnership, Research & Capabilities, Systems Analysis, Systems Engineering, Transportation Energy

    2. Study of behavior and determination of customer lifetime value(CLV) using Markov chain model

      SciTech Connect (OSTI)

      Permana, Dony; Indratno, Sapto Wahyu; Pasaribu, Udjianna S.

      2014-03-24

      Customer Lifetime Value (CLV) is used in interactive marketing to help a company arrange its finances for new customer acquisition and customer retention. Additionally, CLV can be used to segment customers for financial planning. A fairly new stochastic model for CLV uses a Markov chain, in which the customer retention probability and the new customer acquisition probability play an important role. This model was originally introduced by Pfeifer and Carraway in 2000 [1]. They introduced several CLV models, one of which involves only customers and former customers. In this paper we expand that model by adding the assumption of a transition from former customer back to customer. In the proposed model, the CLV value is higher than the CLV value obtained by the Pfeifer and Carraway model, but our model requires a longer convergence time.
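
      A minimal sketch of the Pfeifer and Carraway style computation the abstract builds on; the extension is the nonzero former-to-customer re-acquisition probability q (all numbers illustrative):

        import numpy as np

        p, q, d = 0.7, 0.2, 0.1          # retention, re-acquisition, discount rate
        P = np.array([[p, 1 - p],        # customer -> (customer, former customer)
                      [q, 1 - q]])       # former   -> (customer, former customer)
        v = np.array([100.0, 0.0])       # per-period margin earned in each state

        state = np.array([1.0, 0.0])     # start as a customer
        clv = sum(state @ np.linalg.matrix_power(P, t) @ v / (1 + d) ** t
                  for t in range(200))   # horizon long enough to converge
        print(f"CLV = {clv:.2f}")        # equals state @ inv(I - P/(1+d)) @ v in closed form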

    3. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

      SciTech Connect (OSTI)

      Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

      2013-11-15

      Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. Accurate spectrum information about the source-detector system is also necessary. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
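
      In our own notation (the abstract does not print its equations), the nonlinear polychromatic forward model underlying such approaches is a material-decomposed Beer-Lambert integral applied to the non-log projections,

        y_i = \int S_i(E)\, \exp\!\Big( -\phi_w(E) \int_{L_i} a_w(x)\,\mathrm{d}x \;-\; \phi_b(E) \int_{L_i} a_b(x)\,\mathrm{d}x \Big)\, \mathrm{d}E ,

      where y_i is the expected measurement along ray L_i, S_i(E) is the effective source-detector spectrum, \phi_w and \phi_b are the water and bone basis attenuation functions, and a_w, a_b are the fraction images; the MAP estimate then minimizes a Gaussian misfit to y plus the prior terms.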

    4. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    5. Implementation and Evaluation of the Virtual Fields Method: Determining Constitutive Model Parameters From Full-Field Deformation Data.

      SciTech Connect (OSTI)

      Kramer, Sharlotte Lorraine Bolyard; Scherzinger, William M.

      2014-09-01

      The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identification that relies on full-field experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-field experimental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the implementation and evaluation of VFM primarily for finite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Correlation (DIC) optical technique that provides full-field displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitutive model parameter identification, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
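
      The identification principle can be stated through the principle of virtual work: for every kinematically admissible virtual field u*, the constitutive parameters \theta must satisfy (quasi-static case without body forces; notation ours, not the report's)

        \int_V \sigma\big(\varepsilon^{meas};\, \theta\big) : \varepsilon^*\, \mathrm{d}V = \int_{\partial V} \bar{T} \cdot u^*\, \mathrm{d}S ,

      so a handful of independent virtual fields turns the measured full-field strains into a (least-squares) system for \theta with no iterative FEM solves, which is why VFM is fast compared to FEMU.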

    6. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts; A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James Dumesic; Rahul Nabar; Calvin Bartholonew; Hu Zou; Uchenna Paul

      2008-09-29

      This work focuses on (1) searching/summarizing published Fischer-Tropsch synthesis (FTS) mechanistic and kinetic studies of FTS reactions on iron catalysts; (2) preparation and characterization of unsupported iron catalysts with/without potassium/platinum promoters; (3) measurement of H{sub 2} and CO adsorption/dissociation kinetics on iron catalysts using transient methods; (4) analysis of the transient rate data to calculate kinetic parameters of early elementary steps in FTS; (5) construction of a microkinetic model of FTS on iron; and (6) validation of the model from collection of steady-state rate data for FTS on iron catalysts. Three unsupported iron catalysts and three alumina-supported iron catalysts were prepared by non-aqueous-evaporative deposition (NED) or aqueous impregnation (AI) and characterized by chemisorption, BET, temperature-programmed reduction (TPR), extent-of-reduction, XRD, and TEM methods. These catalysts, covering a wide range of dispersions and metal loadings, are well-reduced and relatively thermally stable up to 500-600 C in H{sub 2} and thus ideal for kinetic and mechanistic studies. Kinetic parameters for CO adsorption, CO dissociation, and surface carbon hydrogenation on these catalysts were determined from temperature-programmed desorption (TPD) of CO, temperature-programmed surface hydrogenation (TPSR), temperature-programmed hydrogenation (TPH), and isothermal, transient hydrogenation (ITH). A microkinetic model was constructed for the early steps in FTS on polycrystalline iron from the kinetic parameters of elementary steps determined experimentally in this work and from literature values. Steady-state rate data were collected in a Berty reactor and used for validation of the microkinetic model. These rate data were fitted to 'smart' Langmuir-Hinshelwood rate expressions derived from a sequence of elementary steps and using a combination of fitted steady-state parameters and parameters specified from the transient measurements. The results provide a platform for further development of microkinetic models of FTS on Fe and a basis for more precise modeling of FTS activity of Fe catalysts. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on various realistic models of industrial, Fe-based FTS catalysts. The close-packed, most stable Fe(110) facet was analyzed; carbide formation was found to be facile, leading to the choice of the FeC(110) model representing a Fe facet with a sub-surface C atom. The Pt adatom (Fe{sup Pt}(110)) was found to be the most stable model for our studies of Pt promotion, and finally the role of steps was elucidated by recourse to the defected Fe(211) facet. Binding Energies (BEs), preferred adsorption sites and geometries for all FTS-relevant stable species and intermediates were evaluated on each model catalyst facet. A mechanistic model (comprising 32 elementary steps involving 19 species) was constructed, and each elementary step therein was fully characterized with respect to its thermochemistry and kinetics. Kinetic calculations involved evaluation of the Minimum Energy Pathways (MEPs) and activation energies (barriers) for each step. Vibrational frequencies were evaluated for the preferred adsorption configuration of each species with the aim of evaluating entropy changes and pre-exponential factors and serving as a useful connection with experimental surface science techniques.
Comparative analysis among these four facets revealed important trends in their relative behavior and roles in FTS catalysis. Overall the First Principles Calculations afforded us a new insight into FTS catalysis on Fe and modified-Fe catalysts.
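
      As one concrete link between the DFT outputs listed above and a microkinetic model, each elementary step's activation energy and entropy yield an Eyring/transition-state rate constant; a minimal sketch with invented values:

        import numpy as np

        kB, h, R = 1.380649e-23, 6.62607e-34, 8.314   # SI units

        def eyring_rate(T, Ea_kJ, dS=0.0):
            """k in 1/s from barrier Ea (kJ/mol) and activation entropy dS (J/(mol K))."""
            return (kB * T / h) * np.exp(dS / R) * np.exp(-Ea_kJ * 1e3 / (R * T))

        print(f"k(500 K, 80 kJ/mol) = {eyring_rate(500.0, 80.0):.3e} 1/s")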

    7. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    8. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear Energy Nuclear

    9. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      3 - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear Energy Nuclear

    10. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear Energy Nuclear

    11. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    12. KINETIC MODELING OF A FISCHER-TROPSCH REACTION OVER A COBALT CATALYST IN A SLURRY BUBBLE COLUMN REACTOR FOR INCORPORATION INTO A COMPUTATIONAL MULTIPHASE FLUID DYNAMICS MODEL

      SciTech Connect (OSTI)

      Anastasia Gribik; Donna Guillen, PhD; Daniel Ginosar, PhD

      2008-09-01

      Currently multi-tubular fixed bed reactors, fluidized bed reactors, and slurry bubble column reactors (SBCRs) are used in commercial Fischer-Tropsch (FT) synthesis. There are a number of advantages of the SBCR compared to fixed and fluidized bed reactors. The main advantage of the SBCR is that temperature control and heat recovery are more easily achieved. The SBCR is a multiphase chemical reactor where a synthesis gas, comprised mainly of H2 and CO, is bubbled through a liquid hydrocarbon wax containing solid catalyst particles to produce specialty chemicals, lubricants, or fuels. The FT synthesis reaction is the polymerization of methylene groups [-(CH2)-] forming mainly linear alkanes and alkenes, ranging from methane to high molecular weight waxes. The Idaho National Laboratory is developing a computational multiphase fluid dynamics (CMFD) model of the FT process in a SBCR. This paper discusses the incorporation of absorption and reaction kinetics into the current hydrodynamic model. A phased approach for incorporation of the reaction kinetics into a CMFD model is presented here. Initially, a simple kinetic model is coupled to the hydrodynamic model, with increasing levels of complexity added in stages. The first phase of the model includes incorporation of the absorption of gas species from both large and small bubbles into the bulk liquid phase. The driving force for the gas across the gas-liquid interface into the bulk liquid depends upon the interfacial gas concentration in both small and large bubbles. However, because it is difficult to measure the concentration at the gas-liquid interface, coefficients for convective mass transfer have been developed for the overall driving force between the bulk concentrations in the gas and liquid phases. It is assumed that there are no temperature effects from mass transfer of the gas phases to the bulk liquid phase, since there are only small amounts of dissolved gas in the liquid phase. The product of the incorporation of absorption is the steady-state concentration profile of the absorbed gas species in the bulk liquid phase. The second phase of the model incorporates a simplified macrokinetic model into the mass balance equation in the CMFD code. Initially, the model assumes that the catalyst particles are sufficiently small that external and internal mass and heat transfer are not rate limiting. The model is developed utilizing the macrokinetic rate expression developed by Yates and Satterfield (1991). Initially, the model assumes that the only species formed other than water in the FT reaction is C27H56. The change in moles of the reacting species and the resulting temperatures of the catalyst and fluid phases are solved simultaneously. The macrokinetic model is solved in conjunction with the species transport equations in a separate module which is incorporated into the CMFD code.
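
      The Yates and Satterfield (1991) rate form named above is commonly written as -R_CO = a*P_CO*P_H2 / (1 + b*P_CO)^2; a minimal sketch with placeholder parameters, not the report's fitted values:

        def co_consumption_rate(P_CO, P_H2, a=0.05, b=2.0):
            """-R_CO (arbitrary units) from CO and H2 partial pressures; a, b invented."""
            return a * P_CO * P_H2 / (1.0 + b * P_CO) ** 2

        print(co_consumption_rate(P_CO=1.0, P_H2=2.0))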

    13. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

      DOE Patents [OSTI]

      Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

      1990-01-01

      The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption versus wavelength function.
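
      In modern terms, the patent's calibration idea resembles least-squares multivariate calibration: build a linear model from the absorption spectra of known fluids, then predict the unknown analyte concentration from its spectrum. A synthetic sketch (ours, not the patent's procedure):

        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_wavelengths = 200, 50
        true_coef = rng.normal(size=n_wavelengths)
        spectra = rng.normal(size=(n_train, n_wavelengths))    # known fluids' spectra
        conc = spectra @ true_coef                             # known analyte concentrations

        coef, *_ = np.linalg.lstsq(spectra, conc, rcond=None)  # calibration model
        unknown = rng.normal(size=n_wavelengths)               # spectrum of unknown sample
        print("predicted concentration:", unknown @ coef)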

    14. Computational Tools for Accelerating Carbon Capture Process Development

      SciTech Connect (OSTI)

      Miller, David; Sahinidis, N.V.; Cozad, A; Lee, A; Kim, H; Morinelly, J.; Eslick, J.; Yuan, Z.

      2013-06-04

      This presentation reports the development of advanced computational tools to accelerate next-generation carbon capture technology development. The tools are used to develop an optimized process from rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

    15. Computational Analysis of the Pyrolysis of β-O-4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon-neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M06-2X density functional theory with the 6-311++G(d,p) basis set. The M06-2X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O-4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin: bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be better suited to producing a fungible bio-oil for the production of liquid transportation fuels.

    16. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

      SciTech Connect (OSTI)

      Katya Le Blanc; Johanna Oxstrand

      2012-04-01

      The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

    17. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project's goals and the resulting research made possible by this award.

    18. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    19. Technosocial Modeling for Determining the Status and Nature of a State's Nuclear Activities

      SciTech Connect (OSTI)

      Gastelum, Zoe N.; Harvey, Julia B.

      2009-09-25

      In The International Atomic Energy Agency State Evaluation Process: The Role of Information Analysis in Reaching Safeguards Conclusions (Mathews et al. 2008), several examples of nonproliferation models using analytical software were developed that may assist the IAEA with collecting, visualizing, analyzing, and reporting information in support of the State Evaluation Process. This paper focuses on one of those examples: a set of models, developed in the Proactive Scenario Production, Evidence Collection, and Testing (ProSPECT) software, that evaluates the status and nature of a state's nuclear activities. The models use three distinct subject areas to perform this assessment: the presence of nuclear activities, the consistency of those nuclear activities with national nuclear energy goals, and the geopolitical context in which those nuclear activities are taking place. As a proof of concept for the models, a crude case study was performed. The study, which attempted to evaluate the nuclear activities taking place in Syria prior to September 2007, yielded illustrative, yet inconclusive, results. Due to the inconclusive nature of the case study results, changes that may improve the models' efficiency and accuracy are proposed.

    20. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

      SciTech Connect (OSTI)

      Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

      2011-02-01

      Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

    1. Compositional modeling in porous media using constant volume flash and flux computation without the need for phase identification

      SciTech Connect (OSTI)

      Polívka, Ondřej; Mikyška, Jiří

      2014-09-01

      The paper deals with the numerical solution of a compositional model describing compressible two-phase flow of a mixture composed of several components in porous media with species transfer between the phases. The mathematical model is formulated by means of the extended Darcy's laws for all phases, the component continuity equations, constitutive relations, and appropriate initial and boundary conditions. The splitting of components among the phases is described using a new formulation of the local thermodynamic equilibrium which uses volume, temperature, and moles as specification variables. The problem is solved numerically using a combination of the mixed-hybrid finite element method for the total flux discretization and the finite volume method for the discretization of transport equations. A new approach to numerical flux approximation is proposed, which does not require the phase identification and determination of correspondence between the phases on adjacent elements. The time discretization is carried out by the backward Euler method. The resulting large system of nonlinear algebraic equations is solved by the Newton-Raphson iterative method. We provide eight examples of different complexity to show the reliability and robustness of our approach.

    2. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts: A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James A. Dumesic; Rahul P. Nabar

      2006-09-29

      Work continued on the development of a microkinetic model of Fischer-Tropsch synthesis (FTS) on supported and unsupported Fe catalysts. The following aspects of the FT mechanism on unsupported iron catalysts were investigated during this third year: (1) the collection of rate data in a Berty CSTR reactor based on sequential design of experiments; (2) CO adsorption and CO-TPD for obtaining the heat of adsorption of CO on polycrystalline iron; and (3) isothermal hydrogenation (IH) after Fischer-Tropsch reaction to identify and quantify surface carbonaceous species. Rates of C{sub 2+} formation on unsupported iron catalysts at 220 C and 20 atm correlated well with a Langmuir-Hinshelwood type expression, derived assuming carbon hydrogenation to CH and OH recombination to water to be rate-determining steps. From desorption of molecularly adsorbed CO at different temperatures, the heat of adsorption of CO on polycrystalline iron was determined to be 100 kJ/mol. Amounts and types of carbonaceous species formed after FT reaction for 5-10 minutes at 150, 175, 200 and 285 C vary significantly with temperature. Mr. Brian Critchfield completed his M.S. thesis work on a statistically designed study of the kinetics of FTS on 20% Fe/alumina. Preparation of a paper describing this work is in progress. Results of these studies were reported at the Annual Meeting of the Western States Catalysis Club and at the San Francisco AIChE meeting. In the coming period, studies will focus on quantitative determination of the rates of kinetically-relevant elementary steps on unsupported Fe catalysts with/without K and Pt promoters by the SSITKA method. This study will help us to (1) understand effects of promoter and support on elementary kinetic parameters and (2) build a microkinetic model for FTS on iron. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on models of defected Fe surfaces, most significantly the stepped Fe(211) surface. Binding Energies (BE's), preferred adsorption sites and geometries of all the FTS-relevant stable species and intermediates were evaluated. Each elementary step of our reaction model was fully characterized with respect to its thermochemistry, and comparisons between the stepped Fe(211) facet and the most-stable Fe(110) facet were established. In most cases the BE's on Fe(211) reflected the trends observed earlier on Fe(110), yet there were significant variations imposed on the underlying trends. Vibrational frequencies were evaluated for the preferred adsorption configurations of each species with the aim of evaluating the entropy changes and pre-exponential factors for each elementary step. Kinetic studies were performed for the early steps of FTS (up to CH{sub 4} formation) and CO dissociation. This involved evaluation of the Minimum Energy Pathway (MEP) and activation energy barrier for the steps involved. We concluded that Fe(211) would allow for far more facile CO dissociation in comparison to other Fe catalysts studied so far, but the other FTS steps studied remained mostly unchanged.

    3. Computational Fluid Dynamics Modeling of the Bonneville Project: Tailrace Spill Patterns for Low Flows and Corner Collector Smolt Egress

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.; Perkins, William A.

      2010-12-01

      In 2003, an extension of the existing ice and trash sluiceway was added at Bonneville Powerhouse 2 (B2). This extension started at the existing corner collector for the ice and trash sluiceway adjacent to Bonneville Powerhouse 2, and the new sluiceway was extended to the downstream end of Cascade Island. The sluiceway was designed to improve juvenile salmon survival by bypassing turbine passage at B2 and placing these smolt in downstream-flowing water, minimizing their exposure to fish and avian predators. In this study, a previously developed computational fluid dynamics model was modified and used to characterize tailrace hydraulics and sluiceway egress conditions for low total river flows and low levels of spillway flow. STAR-CD v4.10 was used for seven scenarios of low total river flow and low spill discharges. The simulation results were examined for tailrace hydraulics at 5 ft below the tailwater elevation, and streamlines were used to compare pathways originating in the corner collector outfall and adjacent to the outfall. These streamlines indicated that for all higher spill percentage cases (25% and greater), streamlines from the corner collector did not approach the shoreline at the downstream end of Bradford Island. For the cases with much larger spill percentages, the streamlines from the corner collector were mid-channel or closer to the Washington shore as they moved downstream. Although at 25% spill and 75 kcfs total river flow the spill volume was sufficient to "cushion" the flow from the corner collector from the Bradford Island shore, areas of recirculation were modeled in the spillway tailrace. However, at the lowest flows and spill percentages, the streamlines from the B2 corner collector pass very close to the Bradford Island shore. In addition, the very low flow velocities and large areas of recirculation greatly increase the potential predator exposure of the spillway-passed smolt. If there is concern about egress for smolt passing through the spillway, the spill pattern and volume need to be revisited.

    4. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node. Last edited: 2016-02-01 08:07:08

    5. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    6. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    7. Determination of Electrochemical Performance and Thermo-Mechanical-Chemical Stability of SOFCs from Defect Modeling

      SciTech Connect (OSTI)

      Eric Wachsman; Keith L. Duncan

      2006-09-30

      This research was focused on two distinct but related issues. The first issue concerned using defect modeling to understand the relationship between point defect concentration and the electrochemical, thermo-chemical and mechano-chemical properties of typical solid oxide fuel cell (SOFC) materials. The second concerned developing relationships between the microstructural features of SOFC materials and their electrochemical performance. To understand the role point defects play in ceramics, a coherent analytical framework was used to develop expressions for the dependence of thermal expansion and elastic modulus on point defect concentration in ceramics. These models, collectively termed the continuum-level electrochemical model (CLEM), were validated through fits to experimental data from electrical conductivity, I-V characteristics, elastic modulus and thermo-chemical expansion experiments for (nominally pure) ceria, gadolinia-doped ceria (GDC) and yttria-stabilized zirconia (YSZ), with consistently good fits. The same values for the material constants were used in all of the fits, further validating our approach. As predicted by the continuum-level electrochemical model, the results reveal that the concentration of defects has a significant effect on the physical properties of ceramic materials and related devices. Specifically, for pure ceria and GDC, the elastic modulus decreased while the chemical expansion increased considerably in low partial pressures of oxygen. Conversely, the physical properties of YSZ remained insensitive to changes in oxygen partial pressure within the studied range. Again, the findings concurred exactly with the predictions of our analytical model. Indeed, further analysis of the results suggests that an increase in the point defect content weakens the attractive forces between atoms in fluorite-structured oxides. The effects of reduction treatment on the flexural strength and fracture toughness of pure ceria were also evaluated at room temperature. The results reveal that the flexural strength decreases significantly after heat treatment in very low oxygen partial pressure environments; however, in contrast, fracture toughness is increased by 30-40% when the oxygen partial pressure was decreased to the 10{sup -20} to 10{sup -22} atm range. Fractographic studies show that microcracks developed at 800 C upon hydrogen reduction are responsible for the decreased strength. To understand the role of microstructure on electrochemical performance, electrical impedance spectra from symmetric LSM/YSZ/LSM cells were deconvoluted to obtain the key electrochemical components of electrode performance, namely charge transfer resistance, surface diffusion of reactive species and bulk gas diffusion through the electrode pores. These properties were then related to microstructural features, such as triple-phase boundary length and tortuosity. From these experiments we found that the impedance due to oxygen adsorption obeys a power law with respect to pore surface area, while the impedance due to charge transfer obeys a power law with respect to triple-phase boundary length. A model based on kinetic theory was then developed to explain the observed power-law relationships. Finally, during our EIS work on the symmetric LSM/YSZ/LSM cells a technique was developed to improve the quality of high-frequency impedance data and their subsequent deconvolution.

    8. Symmetric structure of field algebra of G-spin models determined by a normal subgroup

      SciTech Connect (OSTI)

      Xin, Qiaoling; Jiang, Lining

      2014-09-15

      Let G be a finite group and H a normal subgroup. D(H; G) is the crossed product of C(H) and CG, which is only a subalgebra of D(G), the double algebra of G. One can construct a C*-subalgebra F{sub H} of the field algebra F of G-spin models, so that F{sub H} is a D(H; G)-module algebra, whereas F is not. The observable algebra A{sub (H,G)} is then obtained as the D(H; G)-invariant subalgebra of F{sub H}, and there exists a unique C*-representation of D(H; G) such that D(H; G) and A{sub (H,G)} are commutants of each other.

    9. Radiological Modeling for Determination of Derived Concentration Levels of an Area with Uranium Residual Material - 13533

      SciTech Connect (OSTI)

      Perez-Sanchez, Danyl [CIEMAT, Avenida Complutense 40, 28040, Madrid (Spain)]

      2013-07-01

      As a result of a pilot project developed at the old Spanish 'Junta de Energia Nuclear' to extract uranium from ores, tailings materials were generated. Most of these residual materials were sent back to different uranium mines, but a small amount was mixed with conventional building materials and deposited near the old plant until the surrounding ground was flattened. The affected land is included in an area under institutional control and is used as a recreational area. At the time of processing, uranium isotopes were separated, but other radionuclides of the uranium decay series, such as Th-230, Ra-226 and their daughters, remain in the residue. Recently, the analysis of samples taken at different depths confirmed their presence. This paper presents the methodology used to calculate the derived concentration level that ensures the reference dose level of 0.1 mSv y{sup -1}, used as the radiological criterion, is not exceeded. In this study, a radiological impact assessment was performed modeling the area as a recreational scenario. The modeling study was carried out with the code RESRAD, considering as exposure pathways external irradiation, inadvertent ingestion of soil, inhalation of resuspended particles, and inhalation of radon (Rn-222). It was concluded that if the concentration of Ra-226 in the first 15 cm of soil is lower than 0.34 Bq g{sup -1}, the dose will not exceed the reference dose. Applying this value as a derived concentration level and comparing it with the results of measurements on the ground, some areas with an activity concentration slightly higher than this level were found. In these zones the remediation proposal has been to cover the ground with a 15-cm layer of clean material. This action represents a reduction of 85% of the dose and ensures compliance with the reference dose. (authors)
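
      Under the linear scaling implicit in a single derived concentration level, screening checks reduce to simple arithmetic. A minimal sketch, assuming dose scales linearly with Ra-226 activity concentration; only the 0.1 mSv y{sup -1} criterion, the 0.34 Bq g{sup -1} level, and the 85% cover reduction come from the study, the rest is illustrative:

```python
REFERENCE_DOSE = 0.1   # mSv/y, radiological criterion from the study
DERIVED_LEVEL = 0.34   # Bq/g Ra-226 in the top 15 cm, from the RESRAD runs

def annual_dose(ra226_conc):
    """Estimated dose, assuming linear scaling with Ra-226 concentration
    (the assumption implicit in using one derived concentration level)."""
    return REFERENCE_DOSE * ra226_conc / DERIVED_LEVEL

for conc in (0.2, 0.34, 0.5):   # hypothetical measured concentrations, Bq/g
    dose = annual_dose(conc)
    flag = "OK" if dose <= REFERENCE_DOSE else "remediate"
    print(f"{conc:.2f} Bq/g -> {dose:.3f} mSv/y  [{flag}]")

# A 15 cm clean cover was reported to cut the dose by 85%:
print(f"after cover: {annual_dose(0.5) * (1 - 0.85):.3f} mSv/y")
```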

    10. RECONCILING MODELS OF LUMINOUS BLAZARS WITH MAGNETIC FLUXES DETERMINED BY RADIO CORE-SHIFT MEASUREMENTS

      SciTech Connect (OSTI)

      Nalewajko, Krzysztof; Begelman, Mitchell C.; Sikora, Marek

      2014-11-20

      Estimates of magnetic field strength in relativistic jets of active galactic nuclei, obtained by measuring the frequency-dependent radio core location, imply that the total magnetic fluxes in those jets are consistent with the predictions of the magnetically arrested disk (MAD) scenario of jet formation. On the other hand, the magnetic field strength determines the luminosity of the synchrotron radiation, which forms the low-energy bump of the observed blazar spectral energy distribution (SED). The SEDs of the most powerful blazars are strongly dominated by the high-energy bump, which is most likely due to the external radiation Compton mechanism. This high Compton dominance may be difficult to reconcile with the MAD scenario, unless (1) the geometry of external radiation sources (broad-line region, hot-dust torus) is quasi-spherical rather than flat, or (2) most gamma-ray radiation is produced in jet regions of low magnetization, e.g., in magnetic reconnection layers or in fast jet spines.

    11. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-10-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.
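
      A standard way to quantify the grid dependence described above is Richardson extrapolation on three systematically refined grids. The paper does not state that this procedure was used, so the following is a generic sketch with hypothetical blade-force values:

```python
import math

def richardson(f_fine, f_med, f_coarse, r=2.0):
    """Observed order of accuracy and Richardson-extrapolated value from
    solutions on three grids with a constant refinement ratio r."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return p, f_exact

# Hypothetical blade thrust values (kN) on fine/medium/coarse grids,
# chosen to mimic the reported underprediction on coarse grids.
p, f_exact = richardson(f_fine=105.0, f_med=102.0, f_coarse=96.0)
print(f"observed order p = {p:.2f}, extrapolated thrust = {f_exact:.1f} kN")
```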

    12. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect (OSTI)

      Osario, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21 C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes extract sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20 C in 30 s and from 37 to 15 C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied to the design and development of cooling devices for seizure control.
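
      The Pennes bioheat equation balances conduction against perfusion, which carries heat back in at arterial temperature. As a rough illustration of the spatiotemporal behavior it produces, here is a minimal 1D explicit finite-difference sketch; all parameter values are typical literature numbers rather than the paper's inputs (which used ABAQUS in 3D), and metabolic heat generation is neglected:

```python
import numpy as np

# Typical literature values for brain tissue and blood (not the paper's)
rho_c   = 3.7e6   # volumetric heat capacity of tissue, J/(m^3 K)
k       = 0.5     # thermal conductivity, W/(m K)
w_b     = 0.008   # blood perfusion rate, 1/s
rho_c_b = 3.8e6   # volumetric heat capacity of blood, J/(m^3 K)
T_a, T_probe = 37.0, 15.0   # arterial and probe temperatures, C

n, L = 101, 0.01                      # 101 nodes across 1 cm of tissue
dx = L / (n - 1)
dt = 0.4 * rho_c * dx**2 / (2.0 * k)  # well under the explicit stability limit
T = np.full(n, 37.0)

t = 0.0
while t < 60.0:                       # simulate 60 s of cooling
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    T = T + dt / rho_c * (k * lap + rho_c_b * w_b * (T_a - T))
    T[0] = T_probe                    # cooled probe surface
    T[-1] = 37.0                      # far field held at body temperature
    t += dt

print(f"T at 1 mm from the probe after 60 s: {T[10]:.1f} C")
```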

    13. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

      SciTech Connect (OSTI)

      Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

      2014-09-15

      Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of each mass being missed by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC performance of the models was 0.590 (95% CI, 0.537-0.642) and was also significantly higher than 0.5 (p = 0.0009). Therefore, the authors’ models were generally able to predict which masses were detected and which were missed better than chance. Conclusions: The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms the existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow for the optimized selection of difficult cases for the trainees in an automatic and efficient manner.
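
      Since the study describes per-trainee classifiers over 43 extracted features, evaluated with ROC analysis, a compact sketch may help make the pipeline concrete. Everything below is hypothetical stand-in data, and the logistic-regression classifier is a generic choice (the abstract does not name the classifier used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 43 computer-vision features per mass and a
# per-trainee label (1 = trainee missed the mass, 0 = detected).
X = rng.normal(size=(100, 43))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=100) > 0).astype(int)

# One error-making model per trainee, fit on that trainee's reading history;
# cross-validated probabilities give an unbiased AUC estimate.
model = LogisticRegression(max_iter=1000)
scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC for predicting missed masses: {roc_auc_score(y, scores):.3f}")
```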

    14. REDSHIFT DETERMINATION AND CO LINE EXCITATION MODELING FOR THE MULTIPLY LENSED GALAXY HLSW-01

      SciTech Connect (OSTI)

      Scott, K. S.; Lupu, R. E.; Aguirre, J. E.; Auld, R.; Eales, S.; Aussel, H.; Chanial, P.; Beelen, A.; Bock, J.; Bradford, C. M.; Carpenter, J. M.; Cooray, A.; Dowell, C. D.; Brisbin, D.; Burgarella, D.; Chapman, S. C.; Clements, D. L.; Conley, A.; Cox, P.

      2011-05-20

      We report on the redshift measurement and CO line excitation of HERMES J105751.1+573027 (HLSW-01), a strongly lensed submillimeter galaxy discovered in Herschel/SPIRE observations as part of the Herschel Multi-tiered Extragalactic Survey (HerMES). HLSW-01 is an ultra-luminous galaxy with an intrinsic far-infrared luminosity of L{sub FIR} = 1.4 x 10{sup 13} L{sub sun}, and is lensed by a massive group of galaxies into at least four images with a total magnification of {mu} = 10.9 {+-} 0.7. With the 100 GHz instantaneous bandwidth of the Z-Spec instrument on the Caltech Submillimeter Observatory, we robustly identify a redshift of z = 2.958 {+-} 0.007 for this source, using the simultaneous detection of four CO emission lines (J = 7 {yields} 6, J = 8 {yields} 7, J = 9 {yields} 8, and J = 10 {yields} 9). Combining the measured line fluxes for these high-J transitions with the J = 1 {yields} 0, J = 3 {yields} 2, and J = 5 {yields} 4 line fluxes measured with the Green Bank Telescope, the Combined Array for Research in Millimeter Astronomy, and the Plateau de Bure Interferometer, respectively, we model the physical properties of the molecular gas in this galaxy. We find that the full CO spectral line energy distribution is described well by warm, moderate-density gas with T{sub kin} = 86-235 K and n{sub H{sub 2}} = (1.1-3.5) x 10{sup 3} cm{sup -3}. However, it is possible that the highest-J transitions are tracing a small fraction of very dense gas in molecular cloud cores, and two-component models that include a warm/dense molecular gas phase with T{sub kin} {approx} 200 K, n{sub H{sub 2}}{approx}10{sup 5} cm{sup -3} are also consistent with these data. Higher signal-to-noise measurements of the J{sub up} {>=} 7 transitions with high spectral resolution, combined with high spatial resolution CO maps, are needed to improve our understanding of the gas excitation, morphology, and dynamics of this interesting high-redshift galaxy.
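
      The redshift identification rests on matching several CO lines to a single z via nu{sub obs} = nu{sub rest}/(1 + z). A small sketch of that arithmetic (rest frequencies from the rigid-rotor approximation; the observed frequency below is illustrative, not a fitted value from the Z-Spec spectrum):

```python
CO_10 = 115.2712  # GHz, rest frequency of CO J = 1 -> 0

def co_rest_freq(j_up):
    """Rigid-rotor approximation: nu(J -> J-1) ~ J * nu(1 -> 0),
    accurate to roughly 0.1% for the transitions considered here."""
    return j_up * CO_10

def redshift(nu_obs_ghz, j_up):
    return co_rest_freq(j_up) / nu_obs_ghz - 1.0

# A line observed near 203.9 GHz and identified as CO J = 7 -> 6:
print(f"z = {redshift(203.9, 7):.3f}")   # -> z = 2.957
```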

    15. Quantitative Determination of the Hubbard Model Phase Diagram from Optical Lattice Experiments by Two-Parameter Scaling

      SciTech Connect (OSTI)

      Campo, V. L. Jr.; Capelle, K.; Quintanilla, J.; Hooley, C.

      2007-12-14

      We propose an experiment to obtain the phase diagram of the fermionic Hubbard model, for any dimensionality, using cold atoms in optical lattices. It is based on measuring the total energy for a sequence of trap profiles. It combines finite-size scaling with an additional 'finite-curvature scaling' necessary to reach the homogeneous limit. We illustrate its viability in the 1D case, simulating experimental data in the Bethe-ansatz local-density approximation. Including experimental errors, the filling corresponding to the Mott transition can be determined with better than 3% accuracy.

    16. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    17. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

    18. MO-G-BRF-05: Determining Response to Anti-Angiogenic Therapies with Monte Carlo Tumor Modeling

      SciTech Connect (OSTI)

      Valentinuzzi, D; Simoncic, U; Jeraj, R; Titz, B

      2014-06-15

      Purpose: Patient response to anti-angiogenic therapies with vascular endothelial growth factor receptor - tyrosine kinase inhibitors (VEGFR TKIs) is heterogeneous. This study investigates key biological characteristics that drive differences in patient response via Monte Carlo computational modeling capable of simulating tumor response to therapy with VEGFR TKI. Methods: VEGFR TKIs potently block receptors responsible for promoting angiogenesis in tumors. The model incorporates drug pharmacokinetic and pharmacodynamic properties, as well as patient-specific data of cellular proliferation derived from [18F]FLT-PET data. Sensitivity of tumor response was assessed for multiple parameters, including initial partial oxygen tension (pO{sub 2}), cell cycle time, daily vascular growth fraction, and daily vascular regression fraction. Results were benchmarked to clinical data (patients: 2 weeks on VEGFR TKI, followed by a 1-week drug holiday). The tumor pO{sub 2} was assumed to be uniform. Results: Among the investigated parameters, the simulated proliferation was most sensitive to the initial tumor pO{sub 2}. An initial change of 5 mmHg can already result in significantly different levels of proliferation. The model reveals that hypoxic tumors (pO{sub 2} <= 20 mmHg) show the largest decrease of proliferation, experiencing a mean FLT standardized uptake value (SUVmean) decrease of at least 50% at the end of the clinical trial (day 21). Oxygenated tumors (pO{sub 2} > 20 mmHg) show a transient SUV decrease (30-50%) at the end of the treatment with VEGFR TKI (day 14) but experience a rapid SUV rebound close to the pre-treatment SUV levels (70-110%) at the time of the drug holiday (day 14-21) - the phenomenon known as a proliferative flare. Conclusion: The model's high sensitivity to initial pO{sub 2} clearly emphasizes the need for experimental assessment of the pre-treatment tumor hypoxia status, as it might be predictive of response to anti-angiogenic therapies and the occurrence of proliferative flare. Experimental assessment of other model parameters would further improve understanding of patient response.

    19. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    20. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      science & technology Environmental modeling tools Programs Mathematics, computing, & computer science Modeling, simulation, & visualization Rob Jacob, Computational Climate...

    1. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      diffuse interface methods in ALE-AMR code with application in modeling NDCX-II experiments Wangyi Liu 1 , John Barnard 2 , Alex Friedman 2 , Nathan Masters 2 , Aaron Fisher 2 , Alice Koniges 2 , David Eder 2 1 LBNL, USA, 2 LLNL, USA This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was performed

    2. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

      DOE Patents [OSTI]

      Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

      2001-01-16

      Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during boron neutron capture therapy (BNCT). In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational speed by multiple orders of magnitude.
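
      The central trick is marching particles through uniform volume elements until the material changes. The following is a simplified stand-in for that traversal (fixed fractional-voxel steps rather than the patent's exact integer-increment scheme; the phantom, materials, and step size are toy values):

```python
import numpy as np

def first_material_change(grid, start, direction, step_frac=0.25):
    """March a ray through a uniform voxel grid in fixed fractional-voxel
    increments and return the first voxel whose material differs from the
    starting voxel's (simplified analogue of the patent's traversal)."""
    pos = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    start_mat = grid[tuple(pos.astype(int))]
    while True:
        pos += step_frac * d
        idx = tuple(pos.astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
            return None  # particle exited the geometric model
        if grid[idx] != start_mat:
            return idx   # boundary crossing: capture/scatter bookkeeping here

# Toy 3-material phantom: air (0) | soft tissue (1) | bone (2)
grid = np.zeros((30, 30, 30), dtype=int)
grid[10:, :, :] = 1
grid[20:, :, :] = 2
print(first_material_change(grid, start=(2, 15, 15), direction=(1, 0, 0)))
```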

    3. Determination of CT number and density profile of binderless, pre-treated and tannin-based Rhizophora spp. particleboards using computed tomography imaging and electron density phantom

      SciTech Connect (OSTI)

      Yusof, Mohd Fahmi Mohd; Hamid, Puteri Nor Khatijah Abdul; Tajuddin, Abdul Aziz; Bauk, Sabar; Hashim, Rokiah

      2015-04-29

      Plug density phantoms were constructed in accordance with the CIRS model 062M CT density phantom using binderless, pre-treated and tannin-based Rhizophora Spp. particleboards. The Rhizophora Spp. plug phantoms were scanned along with the CT density phantom using a Siemens Somatom Definition AS CT scanner at three CT energies of 80, 120 and 140 kVp. 15 image slices of 1.0 mm thickness each were taken from the central axis of the CT density phantom for CT number and CT density profile analysis. The values were compared to the water substitute plug phantom from the CT density phantom. The tannin-based Rhizophora Spp. gave the nearest value of CT number to the water substitute at the 80 and 120 kVp CT energies, with {chi}{sup 2} values of 0.011 and 0.014 respectively, while the binderless Rhizophora Spp. gave the nearest CT number to the water substitute at the 140 kVp CT energy, with a {chi}{sup 2} value of 0.023. The tannin-based Rhizophora Spp. gave the nearest CT density profile to the water substitute at all CT energies. This study indicated the suitability of Rhizophora Spp. particleboard as a phantom material for use in CT imaging studies.
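
      CT numbers relate a material's linear attenuation coefficient to water's, which is the arithmetic behind comparing the particleboards against the water substitute. A minimal sketch using the standard Hounsfield relation (the attenuation values below are illustrative, not measurements from the paper):

```python
def ct_number(mu_material, mu_water):
    """CT number in Hounsfield units from linear attenuation coefficients:
    HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu_material - mu_water) / mu_water

# Illustrative attenuation coefficients (cm^-1) at a ~70 keV effective
# energy; a water-equivalent particleboard should give HU close to 0.
mu_water = 0.193
for name, mu in [("water substitute", 0.193), ("particleboard A", 0.190)]:
    print(f"{name}: {ct_number(mu, mu_water):+.0f} HU")
```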

    4. Analytical and computational study of the ideal full two-fluid plasma model and asymptotic approximations for Hall-magnetohydrodynamics

      SciTech Connect (OSTI)

      Srinivasan, B.; Shumlak, U.

      2011-09-15

      The 5-moment two-fluid plasma model uses Euler equations to describe the ion and electron fluids and Maxwell's equations to describe the electric and magnetic fields. Two-fluid physics becomes significant when the characteristic spatial scales are on the order of the ion skin depth and characteristic time scales are on the order of the ion cyclotron period. The full two-fluid plasma model has disparate characteristic speeds ranging from the ion and electron speeds of sound to the speed of light. Two asymptotic approximations are applied to the full two-fluid plasma to arrive at the Hall-MHD model, namely negligible electron inertia and infinite speed of light. The full two-fluid plasma model and the Hall-MHD model are studied for applications to an electromagnetic plasma shock, geospace environmental modeling (GEM challenge) magnetic reconnection, an axisymmetric Z-pinch, and an axisymmetric field reversed configuration (FRC).

    5. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long-range plans to provide leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    6. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Deputy Group Leader (Acting) Bryan Lally Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    7. Use of ARM observations and numerical models to determine radiative and latent heating profiles of mesoscale convective systems for general circulation models

      SciTech Connect (OSTI)

      Houze, Jr., Robert A.

      2013-11-13

      We examined cloud radar data in monsoon climates, using cloud radars at Darwin in the Australian monsoon, on a ship in the Bay of Bengal in the South Asian monsoon, and at Niamey in the West African monsoon. We followed on with a more in-depth study of the continental MCSs over West Africa. We investigated whether the West African anvil clouds connected with squall line MCSs passing over the Niamey ARM site could be simulated in a numerical model by comparing the observed anvil clouds to anvil structures generated by the Weather Research and Forecasting (WRF) mesoscale model at high resolution using six different ice-phase microphysical schemes. We carried out further simulations with a cloud-resolving model forced by sounding network budgets over the Niamey region and over the northern Australian region. We devoted some of the effort of this project to examining how well satellite data can determine the global breadth of the anvil cloud measurements obtained at the ARM ground sites. We next considered whether satellite data could be objectively analyzed so that their large global measurement sets can be systematically related to the ARM measurements. Further differences between the land and ocean MCS anvil clouds were detailed by examining the interior structure of the anvils with the satellite-borne CloudSat Cloud Profiling Radar (CPR). The satellite survey of anvil clouds in the Indo-Pacific region was continued to determine the role of MCSs in producing the cloud pattern associated with the MJO.

    8. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    9. Mechanism and computational model for Lyman-{alpha}-radiation generation by high-intensity-laser four-wave mixing in Kr-Ar gas

      SciTech Connect (OSTI)

      Louchev, Oleg A.; Saito, Norihito; Wada, Satoshi; Bakule, Pavel; Yokoyama, Koji; Ishida, Katsuhiko; Iwasaki, Masahiko

      2011-09-15

      We present a theoretical model combined with a computational study of a laser four-wave mixing process under optical discharge in which the non-steady-state four-wave amplitude equations are integrated with the kinetic equations of initial optical discharge and electron avalanche ionization in Kr-Ar gas. The model is validated by earlier experimental data showing strong inhibition of the generation of pulsed, tunable Lyman-{alpha} (Ly-{alpha}) radiation when using sum-difference frequency mixing of 212.6 nm and tunable infrared radiation (820-850 nm). The rigorous computational approach to the problem reveals the possibility and mechanism of strong auto-oscillations in sum-difference resonant Ly-{alpha} generation due to the combined effect of (i) 212.6-nm (2+1)-photon ionization producing initial electrons, followed by (ii) the electron avalanche dominated by 843-nm radiation, and (iii) the final breakdown of the phase matching condition. The model shows that the final efficiency of Ly-{alpha} radiation generation can achieve a value of {approx}5x10{sup -4} which is restricted by the total combined absorption of the fundamental and generated radiation.
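
      The sum-difference scheme generates omega{sub Ly-alpha} = 2 omega{sub 212.6 nm} - omega{sub IR}, so the output wavelength follows directly from wavenumber arithmetic. A quick check that tuning the IR beam over 820-850 nm scans the output across Lyman-{alpha} (121.567 nm):

```python
def sum_difference_wavelength(lambda_uv_nm, lambda_ir_nm):
    """Generated wavelength for 2*omega_uv - omega_ir four-wave mixing,
    working in wavenumbers (1/lambda is proportional to frequency)."""
    inv = 2.0 / lambda_uv_nm - 1.0 / lambda_ir_nm
    return 1.0 / inv

for lam_ir in (820.0, 843.0, 850.0):
    lam = sum_difference_wavelength(212.6, lam_ir)
    print(f"IR {lam_ir:.0f} nm -> generated {lam:.2f} nm")
# 843 nm IR gives ~121.6 nm, i.e. Lyman-alpha
```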

    10. Inter-comparison of Computer Codes for TRISO-based Fuel Micro-Modeling and Performance Assessment

      SciTech Connect (OSTI)

      Brian Boer; Chang Keun Jo; Wen Wu; Abderrafi M. Ougouag; Donald McEachren; Francesco Venneri

      2010-10-01

      The Next Generation Nuclear Plant (NGNP), the Deep Burn Pebble Bed Reactor (DB-PBR) and the Deep Burn Prismatic Block Reactor (DB-PMR) are all based on fuels that use TRISO particles as their fundamental constituent. The TRISO particle's properties include very high durability in radiation environments, hence the designs' reliance on the TRISO particle to form the principal barrier to radioactive material release. This durability forms the basis for the selection of this fuel type for applications such as Deep Burn (DB), which require exposures up to four times those expected for light water reactors. It follows that the study and prediction of the durability of TRISO particles must be carried out as part of the safety and overall performance characterization of all the designs mentioned above. Such evaluations have been carried out independently by the performers of the DB project using independently developed codes. These codes, PASTA, PISA and COPA, incorporate models for stress analysis on the various layers of the TRISO particle (and of the intervening matrix material for some of them), models for fission product release, migration, and accumulation within the SiC layer of the TRISO particle, just next to the layer, models for free oxygen and CO formation and migration to the same location, models for the temperature field within the various layers of the TRISO particle, and models for the prediction of failure rates. All these models may be either internal to the code or external. This large number of models, the possibility of different constitutive data and model formulations, and the variety of solution techniques make it highly unlikely that the codes would give identical results in the modeling of identical situations. The purpose of this paper is to present the results of an inter-comparison between the codes and to identify areas of agreement and areas that need reconciliation. The inter-comparison has been carried out by the cooperating institutions using a set of pre-defined TRISO conditions (burnup levels, temperature or power levels, etc.) and the outcome will be tabulated in the full-length paper. The areas of agreement will be pointed out and the areas that require further modeling or reconciliation will be shown. In general, the agreement between the codes is good, within less than one order of magnitude in the prediction of TRISO failure rates.

    11. WTP Calculation Sheet: Determining the LAW Glass Former Constituents and Amounts for G2 and ACM Models. 24590-LAW-M4C-LFP-00002, Rev. B

      SciTech Connect (OSTI)

      Gimpel, Rodney F.; Kruger, Albert A.

      2013-12-16

      The purpose of this calculation is to determine the LAW glass former recipe and additives with their respective amounts. The methodology and equations contained herein are to be used in the G2 and ACM models until better information is supplied by R&T efforts. This revision includes calculations that determine the mass and volume of the bulk chemicals/minerals needed per batch. In addition, it contains calculations (for the G2 model) to help prevent overflow in the LAW Feed Preparation Vessel.

    12. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    13. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    14. Computed solid phases limiting the concentration of dissolved constituents in basalt aquifers of the Columbia Plateau in eastern Washington. Geochemical modeling and nuclide/rock/groundwater interaction studies

      SciTech Connect (OSTI)

      Deutsch, W.J.; Jenne, E.A.; Krupka, K.M.

      1982-08-01

      A speciation-solubility geochemical model, WATEQ2, was used to analyze geographically-diverse, ground-water samples from the aquifers of the Columbia Plateau basalts in eastern Washington. The ground-water samples compute to be at equilibrium with calcite, which provides both a solubility control for dissolved calcium and a pH buffer. Amorphic ferric hydroxide, Fe(OH)/sub 3/(A), is at saturation or modestly oversaturated in the few water samples with measured redox potentials. Most of the ground-water samples compute to be at equilibrium with amorphic silica (glass) and wairakite, a zeolite, and are saturated to oversaturated with respect to allophane, an amorphic aluminosilicate. The water samples are saturated to undersaturated with halloysite, a clay, and are variably oversaturated with regard to other secondary clay minerals. Equilibrium between the ground water and amorphic silica presumably results from the dissolution of the glassy matrix of the basalt. The oversaturation of the clay minerals other than halloysite indicates that their rate of formation lags the dissolution rate of the basaltic glass. The modeling results indicate that metastable amorphic solids limit the concentration of dissolved silicon and suggest the same possibility for aluminum and iron, and that the processes of dissolution of basaltic glass and formation of metastable secondary minerals are continuing even though the basalts are of Miocene age. The computed solubility relations are found to agree with the known assemblages of alteration minerals in the basalt fractures and vesicles. Because the chemical reactivity of the bedrock will influence the transport of solutes in ground water, the observed solubility equilibria are important factors with regard to chemical-retention processes associated with the possible migration of nuclear waste stored in the earth's crust.
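
      The saturation states quoted above (equilibrium with calcite, oversaturation with clays) are expressed through the saturation index that speciation-solubility codes such as WATEQ2 compute per mineral. A minimal sketch with hypothetical ion activities and a textbook calcite solubility product:

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP/Ksp): ~0 at equilibrium, >0 oversaturated,
    <0 undersaturated (the per-mineral quantity such codes report)."""
    return math.log10(iap / ksp)

# Illustrative calcite check: ion activities (not concentrations)
a_ca, a_co3 = 10**-3.4, 10**-5.1   # hypothetical ground-water activities
ksp_calcite = 10**-8.48            # ~25 C literature value
print(f"SI(calcite) = {saturation_index(a_ca * a_co3, ksp_calcite):+.2f}")
```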

    15. Computational Procedures for Determining Parameters in Ramberg...

      Office of Scientific and Technical Information (OSTI)

      Jennings, P. C., "Periodic Response of General Yielding Structure," Journal of the Engineering Mechanics Division, ASCE, Vol. 90, No. EM2, 1964. Jennings, P. C., "Earthquake...

    16. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    17. Properties of a soft-core model of methanol: An integral equation theory and computer simulation study

      SciTech Connect (OSTI)

      Hu, Matej; Urbic, Tomaz; Muna, Gianmarco

      2014-10-28

      Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed.
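
      To make the coarse-grained dimer concrete: a minimal sketch of the two site-site potentials, with a standard Lennard-Jones sphere for the methyl site and one common Gaussian-shoulder form of core-softened potential for the hydroxyl site (the functional form and parameters here are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Apolar methyl-group site."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def core_softened(r, eps=1.0, sigma=1.0, lam=1.0, w=0.5, r0=1.5):
    """Hydroxyl-group site: LJ core plus a repulsive Gaussian shoulder,
    one common core-softened form (parameters are illustrative)."""
    return lennard_jones(r, eps, sigma) + lam * eps * np.exp(-((r - r0) / w) ** 2)

r = np.linspace(0.9, 3.0, 8)
print(np.round(core_softened(r), 3))
```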

    18. Monte Carlo determination of the low-energy constants of a spin-(1/2) Heisenberg model with spatial anisotropy

      SciTech Connect (OSTI)

      Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.

      2009-07-15

      Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa{sub 2}Cu{sub 3}O{sub 6.45} as proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-(1/2) Heisenberg model with antiferromagnetic couplings J{sub 1} and J{sub 2} on the square lattice. In particular, the low-energy constants spin stiffness {rho}{sub s}, staggered magnetization M{sub s}, and spin wave velocity c are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses {rho}{sub s1} and {rho}{sub s2} as a function of the ratio J{sub 2}/J{sub 1} of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.

    19. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. [Image captions: growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA; Rayleigh-Taylor turbulence imaging, the largest turbulence simulations to date.] Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    20. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data and Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) apply all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed PlantSEED, a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.
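
      The kmer pipeline's speed comes from reducing annotation to exact lookups of 8-residue oligomers. A toy sketch of the idea (the sequences, function names, and uniqueness filter are made up for illustration; the production pipeline is considerably more elaborate):

```python
from collections import defaultdict

def build_kmer_index(proteins, k=8):
    """Map every k-residue oligomer to the annotated functions of the
    proteins it occurs in; exact lookups then annotate new genomes at
    high speed (a simplified sketch of the signature-kmer idea)."""
    index = defaultdict(set)
    for function, seq in proteins.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(function)
    # Keep only kmers unique to one function: the discriminating signatures
    return {kmer: funcs.pop() for kmer, funcs in index.items() if len(funcs) == 1}

# Toy "database" of annotated proteins (sequences are made up)
db = {"kinase": "MKVLITGAGSGIGK", "ligase": "MSTNPKPQRKTKRN"}
index = build_kmer_index(db)
query = "XXGAGSGIGKYY"
hits = {index[query[i:i + 8]] for i in range(len(query) - 7) if query[i:i + 8] in index}
print(hits)  # -> {'kinase'}
```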

    1. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB you should specify that in your job submission and the batch system will run your job on an

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    3. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    4. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    5. CX-008998: Categorical Exclusion Determination | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      8: Categorical Exclusion Determination CX-008998: Categorical Exclusion Determination "Sustainable Manufacturing via Multi-scale Physics-based Process Modeling and Manufacturing-informed Design" CX(s) Applied: A9, B3.6 Date: 08/27/2012 Location(s): Minnesota Office(s): Golden Field Office The U.S. Department of Energy is proposing to provide federal funding to Third Wave Systems Inc (TWS) to conduct research and development to improve computer modeling software for manufacturing

    6. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    7. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
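
      At its core, a life-cycle cost model of this kind discounts all capital, operating, and energy flows to present value and levelizes them over delivered energy. A minimal sketch with hypothetical inputs (AQUASTOR itself builds these from detailed technical designs and tax/financing conditions):

```python
def levelized_cost(capital, annual_om, annual_energy_gj, rate=0.07, years=30):
    """Levelized cost of delivered heat: present value of all costs
    divided by present value of delivered energy (illustrative inputs)."""
    pv_cost, pv_energy = capital, 0.0
    for t in range(1, years + 1):
        disc = (1.0 + rate) ** -t
        pv_cost += annual_om * disc
        pv_energy += annual_energy_gj * disc
    return pv_cost / pv_energy

cost = levelized_cost(capital=12e6, annual_om=4e5, annual_energy_gj=2e5)
print(f"levelized heat cost: ${cost:.2f}/GJ")
```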

    8. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging

      SciTech Connect (OSTI)

      Nillius, Peter; Klamra, Wlodek; Danielsson, Mats; Sibczynski, Pawel; Sharma, Diksha; Badano, Aldo

      2015-02-15

      Purpose: The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. Methods: The authors measured light output from a 490-{mu}m CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridMANTIS, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits both the light output and the blur measurements at the same time. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. Results: The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV{sup -1}, while measurements using the fast PMT instrument setup and sealed sources reported a light output of 28 keV{sup -1}. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR, resulting in a bulk absorption coefficient of 5 x 10{sup -5} {mu}m{sup -1}. Conclusions: The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5 x 10{sup -5} {mu}m{sup -1}. To the authors' knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.

    9. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping

      SciTech Connect (OSTI)

      Henke, Paul S.; Mak, Chi H.

      2014-08-14

      The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg{sup 2+} that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
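
      A discretely connected energy landscape can be traversed with any deterministic shortest-path search. The paper does not name its algorithm, so the sketch below uses Dijkstra's algorithm over a toy graph as a generic stand-in, with edge weights playing the role of (shifted, non-negative) free-energy differences:

```python
import heapq

def lowest_energy_path(graph, start, goal):
    """Dijkstra search over a discretely connected landscape whose edge
    weights are free-energy differences shifted to be non-negative
    (a generic stand-in for the paper's deterministic graph search)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy landscape: nodes are discrete ion-binding configurations
graph = {"A": [("B", 1.0), ("C", 2.5)], "B": [("D", 2.0)],
         "C": [("D", 0.5)], "D": []}
print(lowest_energy_path(graph, "A", "D"))
```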

    10. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Jeff Nichols, Associate Laboratory Director, Computing and Computational Sciences; Becky Verastegui, Directorate Operations Manager, Computing and Computational Sciences Directorate; Michael Bartell, Chief Information Officer, Information Technologies Services Division; Jim Hack, Director, Climate Science Institute, National Center for Computational Sciences; Shaun Gleason, Division Director, Computational Sciences and Engineering; Barney Maccabe, Division Director, Computer Science

    11. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Node Configuration: 6,384 nodes; 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node; 24 cores per node (153,216 total cores); 32 GB DDR3 1333-MHz memory per node (6,000 nodes); 64 GB DDR3 1333-MHz memory per node (384 nodes). Peak Gflop/s rate: 8.4 Gflops/core, 201.6 Gflops/node, 1.28 Petaflops for the entire machine. Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively. One 6-MB
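
      The quoted figures are internally consistent, as a quick arithmetic check confirms:

        # Consistency check of the quoted node counts and peak rates.
        nodes = 6384
        cores_per_node = 24
        gflops_per_core = 8.4

        print(nodes * cores_per_node)                          # 153216 total cores
        print(gflops_per_core * cores_per_node)                # 201.6 Gflops/node
        print(nodes * gflops_per_core * cores_per_node / 1e6)  # ~1.29 Pflops (quoted as 1.28)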

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    13. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science: Energy Research, Advanced Scientific Computing Research (ASCR)

    14. Technology for Increasing Geothermal Energy Productivity. Computer Models to Characterize the Chemical Interactions of Geothermal Fluids and Injectates with Reservoir Rocks, Wells, and Surface Equipment

      SciTech Connect (OSTI)

      Nancy Moller Weare

      2006-07-25

      This final report describes the results of a research program we carried out over a five-year (3/1999-9/2004) period with funding from a Department of Energy geothermal FDP grant (DE-FG07-99ID13745) and from other agencies. The goal of the research projects in this program was to develop modeling technologies that can increase the understanding of geothermal reservoir chemistry and chemistry-related energy production processes. The ability of computer models to handle many chemical variables and complex interactions makes them an essential tool for building a fundamental understanding of a wide variety of complex geothermal resource and production chemistry. With careful choice of methodology and parameterization, our research objectives were to show that chemical models can correctly simulate behavior for the ranges of fluid composition, formation minerals, temperature, and pressure associated with present and near-future geothermal systems, as well as for the very high-PT chemistry of deep resources that is intractable with traditional experimental methods. Our research results successfully met these objectives. We demonstrated that advances in physical chemistry theory can be used to accurately describe the thermodynamics of solid-liquid-gas systems via their free energies for wide ranges of composition (X), temperature (T), and pressure (P). Eight articles on this work were published in peer-reviewed journals and in conference proceedings. Four are in preparation. Our work has been presented at many workshops and conferences. We also considerably improved our interactive web site (geotherm.ucsd.edu), which was in preliminary form prior to the grant. This site, which includes several model codes treating different XPT conditions, is an effective means of transferring our technologies and is used by the geothermal community and other researchers worldwide. Our models have wide application to many energy-related and other important problems (e.g., scaling prediction in petroleum production systems, stripping towers for mineral production processes, nuclear waste storage, CO2 sequestration strategies, global warming). Although funding decreases cut short the completion of several research activities, we made significant progress on these abbreviated projects.

    15. Sandia Energy Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Aerodynamic Wind-Turbine Blade Design for the National Rotor Testbed: http://energy.sandia.gov/aerodynamic-wind-turbin...

    16. SU-E-J-39: Comparison of PTV Margins Determined by In-Room Stereoscopic Image Guidance and by On-Board Cone Beam Computed Tomography Technique for Brain Radiotherapy Patients

      SciTech Connect (OSTI)

      Ganesh, T; Paul, S; Munshi, A; Sarkar, B; Krishnankutty, S; Sathya, J; George, S; Jassal, K; Roy, S; Mohanti, B

      2014-06-01

      Purpose: Stereoscopic in-room kV image guidance is a faster tool for daily monitoring of patient positioning. Our centre, for the first time in the world, has integrated such a solution from BrainLAB (ExacTrac) with Elekta's volumetric cone beam computed tomography (XVI). Using van Herk's formula, we compared the planning target volume (PTV) margins calculated by both these systems for patients treated with brain radiotherapy. Methods: For a total of 24 patients who received partial or whole brain radiotherapy, verification images were acquired for 524 treatment sessions by XVI and for 334 sessions by ExacTrac out of the total 547 sessions. Systematic and random errors were calculated in the cranio-caudal, lateral, and antero-posterior directions for both techniques. PTV margins were then determined using the van Herk formula. Results: In the cranio-caudal direction, the systematic error, random error, and calculated PTV margin were found to be 0.13 cm, 0.12 cm, and 0.41 cm with XVI and 0.14 cm, 0.13 cm, and 0.44 cm with ExacTrac. The corresponding values in the lateral direction were 0.13 cm, 0.1 cm, and 0.4 cm with XVI and 0.13 cm, 0.12 cm, and 0.42 cm with ExacTrac imaging. The same parameters in the antero-posterior direction were 0.1 cm, 0.11 cm, and 0.34 cm with XVI and 0.13 cm, 0.16 cm, and 0.43 cm with ExacTrac imaging. The margins estimated with the two imaging modalities were comparable within a ± 1 mm limit. Conclusion: Verification of setup errors along the major axes by two independent imaging systems showed that the results are comparable and within ± 1 mm. This implies that the planar-imaging-based ExacTrac can yield equal accuracy in setup error determination as the time-consuming volumetric imaging that is considered the gold standard. Accordingly, PTV margins estimated by this faster imaging technique can be confidently used in the clinical setup.
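
      The quoted margins follow the standard van Herk recipe, margin = 2.5*Sigma + 0.7*sigma, where Sigma is the systematic and sigma the random error. A short check against the reported cranio-caudal values:

        def van_herk_margin(systematic, random_):
            """Standard van Herk PTV margin recipe: 2.5*Sigma + 0.7*sigma."""
            return 2.5 * systematic + 0.7 * random_

        # XVI, cranio-caudal: 2.5*0.13 + 0.7*0.12 = 0.409 -> ~0.41 cm, as quoted.
        print(round(van_herk_margin(0.13, 0.12), 2))
        # ExacTrac, cranio-caudal: 2.5*0.14 + 0.7*0.13 = 0.441 -> ~0.44 cm, as quoted.
        print(round(van_herk_margin(0.14, 0.13), 2))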

    17. Criticality Model

      SciTech Connect (OSTI)

      A. Alsaed

      2004-09-14

      The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential of various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range-of-applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in the ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits into the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).

    18. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    19. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    20. Communication with U.S. federal decision makers : a primer with notes on the use of computer models as a means of communication.

      SciTech Connect (OSTI)

      Webb, Erik Karl; Tidwell, Vincent Carroll

      2009-10-01

      This document outlines ways to communicate more effectively with U.S. Federal decision makers by outlining the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community. However, understanding the risk of technical, political, and/or financial infeasibility and possible unintended consequences is extremely difficult. Primarily, decision makers are supported in their deliberations by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - What can these decision makers do? (2) Finding the target audience - Who are the internal and external trusted advisors? (3) Packaging the message - How do we parse and integrate information, and how do we use computer simulation or models in policy communication?

    1. Categorical Exclusion Determinations: A9 | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Categorical Exclusion Determinations: A9 (Existing Regulations). A9: Information gathering, analysis, and dissemination - information gathering (including, but not limited to, literature surveys, inventories, site visits, and audits), data analysis (including, but not limited to, computer modeling), document preparation (including, but not limited to, conceptual design, feasibility studies, and analytical energy supply and demand studies), and information dissemination (including, but not limited

    2. Use of molecular modeling to determine the interaction and competition of gases within coal for carbon dioxide sequestration

      SciTech Connect (OSTI)

      Jeffrey D. Evanseck; Jeffry D. Madura; Jonathan P. Mathews

      2006-04-21

      Molecular modeling was employed to both visualize and probe our understanding of carbon dioxide sequestration within a bituminous coal. A large-scale (>20,000 atoms) 3D molecular representation of Pocahontas No. 3 coal was generated. This model was constructed based on the review data of Stock and Muntean, oxidation and decarboxylation data for aromatic cluster-size frequency of Stock and Obeng, and the combination of laser desorption mass spectrometry data with HRTEM, which enabled the inclusion of a molecular weight distribution. The model contains 21,931 atoms, with a molecular mass of 174,873 amu and an average molecular weight of 714 amu, with 201 structural components. The structure was evaluated based on several characteristics to ensure a reasonable constitution (chemical and physical representation). The helium density of Pocahontas No. 3 coal is 1.34 g/cm{sup 3} (dmmf) and that of the model was 1.27 g/cm{sup 3}. The structure is microporous, with a pore volume comprising 34% of the volume, as expected for a coal of this rank. The representation was used to visualize CO{sub 2} and CH{sub 4} capacity, and the role of moisture in swelling and in CO{sub 2} and CH{sub 4} capacity reduction. Inclusion of 0.68% moisture by mass (ash-free) enabled the model to swell by 1.2% (volume). Inclusion of CO{sub 2} enabled volumetric swelling of 4%.

    3. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new...

    4. Computing and Computational Sciences Directorate - Computer Science and Mathematics Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    5. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing at JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo

    6. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    7. CX-011580: Categorical Exclusion Determination | Department of Energy

      Office of Environmental Management (EM)

      CX-011580: Categorical Exclusion Determination. Model Validation Using Computational Fluid Dynamics (CFD)-Grade Experimental Database for Next Generation Nuclear Plant (NGNP) Reactor Cavity Cooling Systems with Water and Air. CX(s) Applied: B3.6 Date: 11/13/2013 Location(s): Michigan Office(s): Idaho Operations Office. The University of Michigan proposes to investigate specific thermal-hydraulic issues, which will ensure that the Reactor Cavity Cooling

    8. Determination of equilibrium electron temperature and times using an electron swarm model with BOLSIG+ calculated collision frequencies and rate coefficients

      SciTech Connect (OSTI)

      Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric M.; Ji, Wei

      2015-08-04

      Electromagnetic pulse (EMP) events produce low-energy conduction electrons from Compton electron or photoelectron ionizations with air. It is important to understand how conduction electrons interact with air in order to accurately predict EMP evolution and propagation. An electron swarm model can be used to monitor the time evolution of conduction electrons in an environment characterized by electric field and pressure. Here a swarm model is developed that is based on the coupled ordinary differential equations (ODEs) described by Higgins et al. (1973), hereinafter HLO. The ODEs characterize the swarm electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are calculated and compared to the previously reported fitted functions given in HLO. These swarm parameters are found using BOLSIG+, a two-term Boltzmann solver developed by Hagelaar and Pitchford (2005), which utilizes updated cross sections from the LXcat website created by Pancheshnyi et al. (2012). We validate the swarm model by comparing to experimental effective ionization coefficient data in Dutton (1975) and drift velocity data in Ruiz-Vargas et al. (2010). In addition, we report on electron equilibrium temperatures and times for a uniform electric field of 1 StatV/cm for atmospheric heights from 0 to 40 km. We show that the equilibrium temperature and time are sensitive to the modifications in the collision frequencies and ionization rate based on the updated electron interaction cross sections.
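
      The general shape of such a swarm calculation, integrating coupled ODEs until the electron temperature equilibrates, can be sketched in a few lines. The coefficient functions below are dimensionless placeholders, not the BOLSIG+-derived collision frequencies and ionization rates the paper actually uses.

        import numpy as np
        from scipy.integrate import solve_ivp

        def nu_energy(T):
            return 1.0 + T  # placeholder energy-transfer collision frequency

        def ionization(T):
            return 0.1 * np.exp(-5.0 / max(T, 1e-6))  # placeholder ionization rate

        def rhs(t, y, heating):
            T, n = y  # electron temperature and number density (arbitrary units)
            dT = heating - nu_energy(T) * (T - 0.025)  # field heating vs. relaxation
            dn = ionization(T) * n
            return [dT, dn]

        # Integrate until T settles; the late-time value is the equilibrium temperature.
        sol = solve_ivp(rhs, (0.0, 50.0), [0.025, 1.0], args=(2.0,), max_step=0.1)
        print("equilibrium T (arb. units):", round(float(sol.y[0, -1]), 3))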

    9. Determination of equilibrium electron temperature and times using an electron swarm model with BOLSIG+ calculated collision frequencies and rate coefficients

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric M.; Ji, Wei

      2015-08-04

      Electromagnetic pulse (EMP) events produce low-energy conduction electrons from Compton electron or photoelectron ionizations with air. It is important to understand how conduction electrons interact with air in order to accurately predict EMP evolution and propagation. An electron swarm model can be used to monitor the time evolution of conduction electrons in an environment characterized by electric field and pressure. Here a swarm model is developed that is based on the coupled ordinary differential equations (ODEs) described by Higgins et al. (1973), hereinafter HLO. The ODEs characterize the swarm electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are calculated and compared to the previously reported fitted functions given in HLO. These swarm parameters are found using BOLSIG+, a two-term Boltzmann solver developed by Hagelaar and Pitchford (2005), which utilizes updated cross sections from the LXcat website created by Pancheshnyi et al. (2012). We validate the swarm model by comparing to experimental effective ionization coefficient data in Dutton (1975) and drift velocity data in Ruiz-Vargas et al. (2010). In addition, we report on electron equilibrium temperatures and times for a uniform electric field of 1 StatV/cm for atmospheric heights from 0 to 40 km. We show that the equilibrium temperature and time are sensitive to the modifications in the collision frequencies and ionization rate based on the updated electron interaction cross sections.

    10. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
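
      On a rectangular mesh the group assignment has a simple closed form: the parity of a node's coordinate sum places adjacent nodes in different groups, so every link joins a sender to a checker. A minimal sketch of that invariant (illustrative only, not the patented implementation):

        def group_of(coords):
            # Checkerboard coloring: adjacent mesh nodes differ by 1 in exactly
            # one coordinate, so their coordinate-sum parity always differs.
            return sum(coords) % 2  # 0 = sending group, 1 = checking group

        # Verify that every link of a 4x3 mesh connects the two groups, which is
        # what lets all links be exercised by test messages in a single round.
        W, H = 4, 3
        for x in range(W):
            for y in range(H):
                for nx, ny in ((x + 1, y), (x, y + 1)):
                    if nx < W and ny < H:
                        assert group_of((x, y)) != group_of((nx, ny))
        print("every link joins opposite groups")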

    11. Programming models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Task-based models and abstractions (such as those offered by CHARM++, Legion, and HPX, for example) offer many attractive features for mapping computations onto...

    12. Determination of filter-cake thicknesses from on-line flow measurements and gas/particle transport modeling

      SciTech Connect (OSTI)

      Smith, D.H.; Powell, V.; Ibrahim, E.; Ferer, M.; Ahmadi, G.

      1996-12-31

      The use of cylindrical candle filters to remove fine ({approx}0.005 mm) particles from hot ({approx}500-900{degrees}C) gas streams currently is being developed for applications in advanced pressurized fluidized bed combustion (PFBC) and integrated gasification combined cycle (IGCC) technologies. Successfully deployed with hot-gas filtration, PFBC and IGCC technologies will allow the conversion of coal to electrical energy by direct passage of the filtered gases into non-ruggedized turbines and thus provide substantially greater conversion efficiencies with reduced environmental impacts. In the usual approach, one or more clusters of candle filters are suspended from a tubesheet in a pressurized (P {approx_lt}1 MPa) vessel into which hot gases and suspended particles enter, the gases pass through the walls of the cylindrical filters, and the filtered particles form a cake on the outside of each filter. The cake is then removed periodically by a backpulse of compressed air from inside the filter, which passes through the filter wall and filter cake. In various development or demonstration systems the thickness of the filter cake has proved to be an important, but unknown, process parameter. This paper describes a physical model for cake and pressure buildups between cleaning backpulses, and for longer-term buildups of the ''baseline'' pressure drop, as caused by incomplete filter cleaning and/or re-entrainment. When combined with operating data and laboratory measurements of the cake porosity, the model may be used to calculate the (average) filter permeability, the filter-cake thickness and permeability, and the fraction of filter cake left on the filter by the cleaning backpulse or re-entrained after the backpulse. When used for a variety of operating conditions (e.g., different coals, sorbents, temperatures, etc.), the model eventually may provide useful information on how the filter-cake properties depend on the various operating parameters.
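
      Under a plain Darcy series-resistance picture, an assumption made here for illustration rather than the report's exact formulation, the cake thickness follows directly from the measured pressure drop:

        # Pressure drop across filter wall + cake in series (Darcy's law):
        #   dP = mu * v * (L_f/k_f + L_c/k_c)  =>  L_c = k_c * (dP/(mu*v) - L_f/k_f)
        # All values below are illustrative placeholders, not measured data.
        mu = 4.0e-5    # hot-gas viscosity, Pa*s
        v = 0.03       # face velocity, m/s
        L_f = 0.01     # candle-filter wall thickness, m
        k_f = 2.0e-12  # filter permeability, m^2
        k_c = 1.0e-13  # cake permeability, m^2

        def cake_thickness(dP):
            return k_c * (dP / (mu * v) - L_f / k_f)

        print(cake_thickness(12e3))  # ~0.0005 m of cake at a 12 kPa pressure drop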

    13. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20-megavolt-ampere electrical feeds from a 90-megawatt substation. The building also

    14. Distributed computing for signal processing: modeling of asynchronous parallel computation. Appendix C. Fault-tolerant interconnection networks and image-processing applications for the PASM parallel processing systems. Final report

      SciTech Connect (OSTI)

      Adams, G.B.

      1984-12-01

      The demand for very-high-speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single instruction stream-multiple data stream (SIMD) and multiple instruction stream-multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task. Thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, reliability of the ESC and an enhanced version of it are investigated.

    15. Use of ARM observations and numerical models to determine radiative and latent heating profiles of mesoscale convective systems for general circulation models

      SciTech Connect (OSTI)

      Tao, Wei-Kuo; Houze, Robert, A., Jr.; Zeng, Xiping

      2013-03-14

      This three-year project, in cooperation with Professor Bob Houze at the University of Washington, has been successfully completed as planned. Both ARM (Atmospheric Radiation Measurement Program) data and cloud-resolving model (CRM) simulations were used to identify the water budgets of clouds observed in two international field campaigns. The research results shed light on several key processes of clouds in climate change and general circulation models, which are summarized below.
      1. Revealed the effect of mineral dust on mesoscale convective systems (MCSs). Two international field campaigns near a desert and a tropical coast provided unique data to drive and evaluate CRM simulations: TWP-ICE (the Tropical Warm Pool International Cloud Experiment) and AMMA (the African Monsoon Multidisciplinary Analysis). Studies of the two campaign datasets were contrasted, revealing that abundant mineral dust can bring about large MCSs via ice nucleation and clouds. This result was reported as a PI presentation at the 3rd ASR Science Team meeting held in Arlington, Virginia in March 2012. A paper on the studies was published in the Journal of the Atmospheric Sciences (Zeng et al. 2013).
      2. Identified the effect of convective downdrafts on ice crystal concentration. Using the large-scale forcing data from TWP-ICE, ARM-SGP (the Southern Great Plains) and other field campaigns, Goddard CRM simulations were carried out in comparison with radar and satellite observations. The comparison between model and observations revealed that convective downdrafts could increase ice crystal concentration by up to three or four orders of magnitude, which is key to quantitatively representing the indirect effects of ice nuclei, a kind of aerosol, on clouds and radiation in the Tropics. This result was published in the Journal of the Atmospheric Sciences (Zeng et al. 2011) and summarized in the DOE/ASR Research Highlights Summaries (see http://www.arm.gov/science/highlights/RMjY5/view).
      3. Used radar observations to evaluate model simulations. In cooperation with Profs. Bob Houze at the University of Washington and Steven Rutledge at Colorado State University, numerical model results were evaluated with observations from W- and C-band radars and the CloudSat/TRMM satellites. These studies exhibited some shortcomings of current numerical models, such as too little thin anvil cloud, directing the future improvement of cloud microphysics parameterization in CRMs. Two papers summarizing these studies, Powell et al. (2012) and Zeng et al. (2013), were published in the Journal of the Atmospheric Sciences.
      4. Analyzed the water budgets of MCSs. Using ARM data from TWP-ICE, ARM-SGP and other field campaigns, Goddard CRM simulations were carried out to analyze the water budgets of clouds from TWP-ICE and AMMA. The simulations generated a set of datasets on clouds and radiation, which are available at http://cloud.gsfc.nasa.gov/. The cloud datasets are available for modelers and other researchers aiming to improve the representation of cloud processes in multi-scale modeling frameworks, GCMs, and climate models. Special datasets, such as 3D cloud distributions every six minutes for TWP-ICE, were requested and generated for ARM/ASR investigators. Data server records show that 86,206 datasets were downloaded by 120 users between April of 2010 and January of 2012.
      5. MMF simulations. The Goddard MMF (multi-scale modeling framework) has been improved by coupling with the Goddard Land Information System (LIS) and the Goddard Earth Observing System Model, Version 5 (GEOS-5). It has also been optimized on NASA HEC supercomputers and can be run on over 4,000 CPUs. The improved MMF with high horizontal resolution (1 x 1 degree) is currently being applied to cases covering 2005 and 2006. The results show that the spatial distribution pattern of precipitation rate is well simulated by the MMF, based on comparisons with satellite retrievals from the CMORPH and GPCP datasets. In addition, the MMF results were compared with three reanalyses (MERRA, ERA-Interim and CFSR). Although the MMF tends to produce a higher precipitation rate over some tropical regions, it captures the variations in the zonal and meridional means well. Among the three reanalyses, ERA-Interim seems to have values closest to the satellite retrievals, especially GPCP. It is interesting to note that the MMF obtained the best results in the rain forest of Africa, better even than CFSR and ERA-Interim, when compared to CMORPH. MERRA fails to capture the precipitation in this region. We are now collaborating with Steve Rutledge (CSU) to validate the model results for AMMA.
      6. MC3E and the diurnal variation of precipitation processes. The Midlatitude Continental Convective Clouds Experiment (MC3E) was a joint field campaign between the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility and the NASA Global Precipitation Measurement (GPM) mission Ground Validation (GV) program. It took place in central Oklahoma during the period April 22 - June 6, 2011. Some of its major objectives involve the use of CRMs in precipitation science, such as: (1) testing the fidelity of CRM simulations via intensive statistical comparisons between simulated and observed cloud properties and latent heating fields for a variety of case types, (2) establishing the limits of CRM space-time integration capabilities for quantitative precipitation estimates, and (3) supporting the development and refinement of physically-based GMI, DPR, and DPR-GMI combined retrieval algorithms using ground-based GPM GV Ku-Ka band radar and CRM simulations. The NASA Unified WRF model (NU-WRF) was used for real-time forecasts during the field campaign, and ten precipitation events were selected for post-mission simulations. These events include well-organized squall lines, scattered storms, and quasi-linear storms. A paper focused on the diurnal variation of precipitation will be submitted in September 2012. The major highlights are as follows: (a) the results indicate that the NU-WRF model could capture the observed diurnal variation of rainfall (composite, not individual); (b) the NU-WRF model could simulate two different types (propagating and local) of the diurnal variation of rainfall; (c) the NU-WRF simulations show very good agreement with observations in terms of precipitation pattern (linear MCS) and radar reflectivity (a second low peak - shallow convection); (d) the NU-WRF simulations indicate that cold-pool dynamics are the main physical process governing MCS propagation speed; (e) surface heat fluxes (including the land surface model and initial surface conditions) do not play a major role in the phase of the diurnal variation (they change the rainfall amount slightly); (f) the terrain effect is important for the initial stage of an MCS (rainfall is increased, and closer to observation, when the terrain height is raised toward observed values); (g) the diurnal variation of radiation is not important for the simulated variation of rainfall.
      Publications: Zeng, X., W.-K. Tao, S. Powell, R. Houze, Jr., P. Ciesielski, N. Guy, H. Pierce and T. Matsui, 2012: A comparison of the water budgets between clouds from AMMA and TWP-ICE. J. Atmos. Sci., 70, 487-503. Powell, S. W., R. A. Houze, Jr., A. Kumar, and S. A. McFarlane, 2012: Comparison of simulated and observed continental tropical anvil clouds and their radiative heating profiles. J. Atmos. Sci., 69, 2662-2681. Zeng, X., W.-K. Tao, T. Matsui, S. Xie, S. Lang, M. Zhang, D. Starr, and X. Li, 2011: Estimating the Ice Crystal Enhancement Factor in the Tropics. J. Atmos. Sci., 68, 1424-1434.
      Conferences: Zeng, X., W.-K. Tao, S. Powell, R. Houze, Jr., P. Ciesielski, N. Guy, H. Pierce and T. Matsui, 2012: Comparison of water budget between AMMA and TWP-ICE clouds. The 3rd Annual ASR Science Team Meeting, Arlington, Virginia, Mar. 12-16, 2012. Zeng, X., W.-K. Tao, S. Powell, R. A. Houze Jr., and P. Ciesielski, 2011: Comparing the water budgets between AMMA and TWP-ICE clouds. Fall 2011 ASR Working Group Meeting, Annapolis, September 12-16, 2011. Zeng, X. et al., 2011: Introducing ice nuclei into turbulence parameterizations in CRMs. Fall 2011 ASR Working Group Meeting, Annapolis, September 12-16, 2011.

    16. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INL's high-performance computing center provides general-use scientific computing capabilities to support the lab's efforts in advanced...

    17. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    18. Assessment of Current Process Modeling Approaches to Determine Their Limitations, Applicability and Developments Needed for Long-Fiber Thermoplastic Injection Molded Composites

      SciTech Connect (OSTI)

      Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.; Kunc, Vlastimil; Norris, Robert E.; Phelps, Jay; Tucker III, Charles L.

      2006-11-30

      This report describes the status of current process modeling approaches for predicting the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and the as-formed composite part or component can then be predicted, containing a microstructure that results from the constituent material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of the different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented, reflecting the coupled flow/orientation nature of the problem. The decoupled flow/orientation approach is also discussed, which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through fiber orientation predictions for selected LFT samples.

    19. 2014-12-22 Issuance: Alternative Efficiency Determination Methods, Basic Model Definition, and Compliance for Commercial HVAC, Refrigeration, and Water Heating Equipment; Final Rule

      Broader source: Energy.gov [DOE]

      This document is a pre-publication Federal Register final rule regarding alternative efficiency determination methods, basic model definition, and compliance for commercial HVAC, refrigeration, and water heating equipment, as issued by the Deputy Assistant Secretary for Energy Efficiency on December 22, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.

    1. Studies of Ocean Predictability at Decade to Century Time Scales Using a Global Ocean General Circulation Model in a Parallel Computing Environment

      SciTech Connect (OSTI)

      Barnett, T.P.

      1998-11-30

      The objectives of this report are to determine the structure of oceanic natural variability at time scales of decades to centuries; characterize the physical mechanisms responsible for the variability; determine the relative importance of heat, fresh water, and momentum fluxes on the variability; and determine the predictability of the variability on these time scales. (B204)

    2. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

      SciTech Connect (OSTI)

      Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.; Stewart, G. M.; Jonkman, J.; Robertson, A.

      2014-09-01

      Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
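
      For reference, the Morison-type member load such engineering tools employ has the classic per-unit-length form F = 0.5*rho*Cd*D*|u|*u + rho*Cm*(pi*D^2/4)*du/dt. A sketch with illustrative coefficients (not the OC4-DeepCwind values):

        import math

        def morison_force_per_length(u, u_dot, D, rho=1025.0, Cd=1.0, Cm=2.0):
            """Classic Morison load per unit length of a vertical cylinder:
            quadratic drag plus inertia. Coefficients here are illustrative only."""
            drag = 0.5 * rho * Cd * D * abs(u) * u
            inertia = rho * Cm * (math.pi * D**2 / 4.0) * u_dot
            return drag + inertia

        # Example: a 6 m diameter column in a wave with u = 1.2 m/s, du/dt = 0.8 m/s^2.
        print(morison_force_per_length(1.2, 0.8, 6.0))  # N per meter of length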

    3. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    4. A toolkit for building earth system models

      SciTech Connect (OSTI)

      Foster, I.

      1993-03-01

      An earth system model is a computer code designed to simulate the interrelated processes that determine the earth's weather and climate, such as atmospheric circulation, atmospheric physics, atmospheric chemistry, oceanic circulation, and biosphere. I propose a toolkit that would support a modular, or object-oriented, approach to the implementation of such models.
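
      A minimal sketch of what such a modular component interface might look like in a modern language (purely illustrative; the proposed toolkit predates these conventions):

        from abc import ABC, abstractmethod

        class EarthSystemComponent(ABC):
            """One process model (atmosphere, ocean, biosphere, ...) behind a
            common interface, so components can be composed or swapped."""
            @abstractmethod
            def step(self, state: dict, dt: float) -> dict: ...

        class ToyAtmosphere(EarthSystemComponent):
            def step(self, state, dt):
                # Relax air temperature toward sea-surface temperature.
                state["t_air"] += 0.1 * dt * (state["t_sea"] - state["t_air"])
                return state

        class ToyOcean(EarthSystemComponent):
            def step(self, state, dt):
                state["t_sea"] += 0.01 * dt * (state["t_air"] - state["t_sea"])
                return state

        # A trivial coupler just advances each component over a shared state.
        components = [ToyAtmosphere(), ToyOcean()]
        state = {"t_air": 15.0, "t_sea": 18.0}
        for _ in range(100):
            for component in components:
                state = component.step(state, dt=1.0)
        print(state)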

    5. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster, Oct. 21-22, 2010, Argonne TRACC. Dr. Cezary Bojanowski, Dr. Ronald F. Kulak. The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with

    6. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    7. Mira Computational Readiness Assessment | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Assess your project's computational readiness for Mira: a review of the following computational readiness points in relation to scaling, porting, I/O, memory

    8. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources

    9. Modeling applications in the warm dense matter NDCX experiment

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Wangyi Liu (1), John Barnard (2), Alex Friedman (2), Nathan Masters (2), Aaron Fisher (2), Velemir Mlaker (2), Alice Koniges (2), David Eder (2); (1) LBNL, USA; (2) LLNL, USA. This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. NERSC provided computational resources. Work

    10. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations: Transportation Energy, Predictive Simulation of Engines, Reacting Flow, Applied Math & Software

    11. Phase formation sequences in the silicon-phosphorous system: determined by in-situ synchrotron and conventional x-ray diffraction measurements and predicted by a theoretical model.

      SciTech Connect (OSTI)

      Carlsson, J. R. A.; Clevenger, L.; Madsen, L. D.; Hultman, L.; Li, X.-H.; Jordan-Sweet, J.; Lavoie, C.; Roy, R. A.; Cabral, C., Jr.; Morales, G.; Ludwig, K. L.; Stephenson, G. B.; Hentzell, H. T. G.; Materials Science Division; Linkoeping Univ.; IBM T. J. Watson Research Center; Boston Univ.

      1997-01-01

      The phase formation sequences of Si-P alloy thin films with P concentrations between 20 and 44 at.% have been studied. The samples were annealed at progressively higher temperatures and the newly formed phases were identified both after each annealing step by ex-situ conventional X-ray diffraction (XRD) and continuously by in-situ synchrotron XRD. It was found that Si was the only phase to form in a sample with 20 at.% P since the evaporation of P at the crystallization temperature prevented phosphides from forming. For a sample with 30 at.% P, the Si{sub 12}P{sub 5} phase formed prior to the SiP phase. For samples with 35 and 44 at.% P, the formation of SiP preceded the formation of the Si{sub 12}P{sub 5} phase. The experimentally determined phase formation sequences were successfully predicted by a proposed model. According to the model, the first and second crystalline phases to form are those with the lowest and next-lowest crystallization temperatures of the competing compounds predicted by the Gibbs free-energy diagram.
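
      The model's ordering rule is simple: among the competing compounds, phases form in ascending order of crystallization temperature. A one-liner sketch with hypothetical temperatures; the composition dependence enters through the Gibbs free-energy diagram, which is omitted here:

        # Hypothetical crystallization temperatures (degrees C); the model predicts
        # the formation sequence by sorting them in ascending order.
        crystallization_T = {"Si": 580, "Si12P5": 620, "SiP": 650}
        print(sorted(crystallization_T, key=crystallization_T.get))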

    12. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    13. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies threatening mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.

    14. Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer-Aided Engineering of Batteries under Abuse (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Wierzbicki, T.; Sahraei, E.; Li, G.; Collins, L.; Sprague, M.; Kim, G. H.; Santhangopalan, S.

      2014-06-01

      The EV Everywhere Grand Challenge aims to produce plug-in electric vehicles as affordable and convenient for the American family as gasoline-powered vehicles by 2022. Among the requirements set by the challenge, electric vehicles must be as safe as conventional vehicles, and EV batteries must not lead to unsafe situations under abuse conditions. NREL's project started in October 2013, based on a proposal in response to the January 2013 DOE VTO FOA, with the goal of developing computer aided engineering tools to accelerate the development of safer lithium ion batteries.

    15. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

    16. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    17. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied & Computational Math - Sandia Energy

    18. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    19. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Computer Security: NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    1. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - models and correlations

      SciTech Connect (OSTI)

      Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.

      1998-03-01

      This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a systems transient code for application to different versions of Boiling Water Reactors (BWR) such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). This code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes modeling of the plant control and safety systems. Chapter 6 presents the modeling of the Balance of Plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs.

    2. CX-009296: Categorical Exclusion Determination

      Broader source: Energy.gov [DOE]

      Advanced Computational and Modeling Research for the Electric Power System CX(s) Applied: A9 Date: 09/04/2012 Location(s): Tennessee Office(s): National Energy Technology Laboratory

    3. CX-003549: Categorical Exclusion Determination

      Broader source: Energy.gov [DOE]

      Request for Categorical Exclusion-A for Computer Modeling CX(s) Applied: A9 Date: 08/25/2010 Location(s): Bloomington, Indiana Office(s): Fossil Energy, National Energy Technology Laboratory

    4. Numerical computation of Pop plot

      SciTech Connect (OSTI)

      Menikoff, Ralph

      2015-03-23

      The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
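
      A Pop plot is conventionally a straight line in log-log coordinates, so an automated computation like the one described can reduce each simulated run to a point and fit log10(run distance) against log10(initial pressure). A sketch with made-up illustrative numbers, not PBX 9502 data:

      ```python
      import numpy as np

      # Illustrative points only; real Pop plot data comes from wedge tests.
      pressure_gpa = np.array([3.0, 5.0, 8.0, 12.0])    # initial shock pressure
      run_to_det_mm = np.array([25.0, 10.0, 4.5, 2.0])  # distance of run to detonation

      # Conventional Pop plot form: log10(x_run) = a + b * log10(P).
      b, a = np.polyfit(np.log10(pressure_gpa), np.log10(run_to_det_mm), 1)
      print(f"log10(x_run) = {a:.3f} + {b:.3f} * log10(P)")
      ```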

    5. Insights on the binding of thioflavin derivative markers to amyloid fibril models and Aβ{sub 1-40} fibrils from computational approaches

      SciTech Connect (OSTI)

      Alí-Torres, Jorge; Rimola, Albert; Sodupe, Mariona; Rodríguez-Rodríguez, Cristina

      2014-10-06

      The present contribution analyzes the binding of thioflavin T (ThT) and neutral ThT derivatives to a β-sheet model by means of quantum chemical calculations. In addition, we study the properties of four molecules: 2-(2-hydroxyphenyl)benzoxazole (HBX), 2-(2-hydroxyphenyl)benzothiazole (HBT), and their respective iodinated compounds, HBXI and HBTI, in binding to amyloid fibril models and Aβ{sub 1-40} fibrils by using a combination of docking, molecular dynamics, and quantum mechanics calculations.

    6. Method for transferring data from an unsecured computer to a secured computer

      DOE Patents [OSTI]

      Nilsen, Curt A. (Castro Valley, CA)

      1997-01-01

      A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
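
      The claimed check reduces to: send the data twice and warn only when errors appear in both passes. A minimal sketch under that reading; the SHA-256 comparison and the `channel` callable are stand-ins of my own, not the patent's mechanism:

      ```python
      import hashlib

      def corrupted(sent: bytes, received: bytes) -> bool:
          # Stand-in error check; the patent does not prescribe SHA-256.
          return hashlib.sha256(sent).digest() != hashlib.sha256(received).digest()

      def transfer(data: bytes, channel):
          first = channel(data)    # transmit and receive
          second = channel(data)   # retransmit and rereceive
          if corrupted(data, first) and corrupted(data, second):
              print("WARNING: errors in both the transmission and the retransmission")
              return None
          # At least one pass arrived intact; return an uncorrupted copy.
          return first if not corrupted(data, first) else second

      # Usage with a lossless stand-in channel:
      assert transfer(b"payload", channel=lambda d: d) == b"payload"
      ```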

    7. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    8. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    9. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers! Boy Scout Troop 405! What is a computer? Is this a computer? Charles Babbage: Father of the Computer. 1830s: designed mechanical calculators to reduce human error. *Input device *Memory to store instructions and results *A processor *Output device. Vacuum tube: Edison (1883) and Lee de Forest (1906) discovered that "vacuum tubes" could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean (George Boole) logic

    10. Technical Approach for Determining Key Parameters Needed for Modeling the Performance of Cast Stone for the Integrated Disposal Facility Performance Assessment

      SciTech Connect (OSTI)

      Yabusaki, Steven B.; Serne, R. Jeffrey; Rockhold, Mark L.; Wang, Guohui; Westsik, Joseph H.

      2015-03-30

      Washington River Protection Solutions (WRPS) and its contractors at Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) are conducting a program to develop and refine the cementitious waste form for the wastes treated at the ETF and to provide the data needed to support the IDF PA. This technical approach document is intended to provide guidance to the cementitious waste form development program with respect to the waste form characterization and testing information needed to support the IDF PA. At the time of the preparation of this technical approach document, the IDF PA effort is just getting started and the approach to analyze the performance of the cementitious waste form has not been determined. Therefore, this document looks at a number of different approaches for evaluating the waste form performance and describes the testing needed to provide data for each approach. Though the approach addresses a cementitious secondary aqueous waste form, it is applicable to other waste forms such as Cast Stone for supplemental immobilization of Hanford LAW. The performance of Cast Stone as a physical and chemical barrier to the release of contaminants of concern (COCs) from solidification of Hanford liquid low activity waste (LAW) and secondary wastes processed through the Effluent Treatment Facility (ETF) is of critical importance to the Hanford Integrated Disposal Facility (IDF) total system performance assessment (TSPA). The effectiveness of cementitious waste forms as a barrier to COC release is expected to evolve with time. PA modeling must therefore anticipate and address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time. Most organizations responsible for disposal facility operation and their regulators support an iterative hierarchical safety/performance assessment approach with a general philosophy that modeling provides the critical link between the short-term understanding from laboratory and field tests, and the prediction of repository performance over repository time frames and scales. One common recommendation is that experiments be designed to permit the appropriate scaling in the models. There is a large contrast in the physical and chemical properties between the Cast Stone waste package and the IDF backfill and surrounding sediments. Cast Stone exhibits low permeability, high tortuosity, low carbonate, high pH, and low Eh whereas the backfill and native sediments have high permeability, low tortuosity, high carbonate, circumneutral pH, and high Eh. These contrasts have important implications for flow, transport, and reactions across the Cast Stone – backfill interface. Over time with transport across the interface and subsequent reactions, the sharp geochemical contrast will blur and there will be a range of spatially-distributed conditions. In general, COC mobility and transport will be sensitive to these geochemical variations, which also include physical changes in porosity and permeability from mineral reactions. Therefore, PA modeling must address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time.
Section 2 of this document reviews past Hanford PAs and SRS Saltstone PAs, which to date have mostly relied on the lumped parameter COC release conceptual models for TSPA predictions, and provides some details on the chosen values for the lumped parameters. Section 3 provides more details on the hierarchical modeling strategy and processes and mechanisms that control COC release. Section 4 summarizes and lists the key parameters for which numerical values are needed to perform PAs. Section 5 provides brief summaries of the methods used to measure the needed parameters and references to get more details.

    11. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory & Computation. In This Section: Computation & Simulation; Theory & Computation...

    12. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      PVLibMatlab Permalink Gallery Sandia Labs Releases New Version of PVLib Toolbox Modeling, News, Photovoltaic, Solar Sandia Labs Releases New Version of PVLib Toolbox Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power systems. The version 1.3 release includes the following added functions: functions to estimate parameters for popular PV module models, including PVsyst and the CEC '5 parameter' model a new model of the effects of solar

    13. Large Scale Production Computing and Storage Requirements for Advanced

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scientific Computing Research: Target 2017 Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017 ASCRLogo.png This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

    14. Pacing a data transfer operation between compute nodes on a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A. (Rochester, MN)

      2011-09-13

      Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
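
      A schematic sketch of the pacing loop in the claim. Python callables stand in for the DMA engines; the real mechanism uses remote-get DMA operations, and every name below is invented for illustration:

      ```python
      def paced_send(chunks, send_chunk, request_pacing, response_received):
          # Transfer the next chunk only after the target DMA engine has
          # answered the previous pacing request.
          for chunk in chunks:
              send_chunk(chunk)
              request_pacing()          # stands in for the remote-get DMA operation
              while not response_received():
                  pass                  # origin node polls for the pacing response

      sent = []
      paced_send([b"chunk-0", b"chunk-1"], sent.append,
                 request_pacing=lambda: None, response_received=lambda: True)
      print(sent)  # [b'chunk-0', b'chunk-1']
      ```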

    15. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, by use of a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
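
      The "intelligent pointer" idea (links that carry keep-alive semantics) can be loosely illustrated in Python with ordinary strong references versus `weakref`. This is an analogy under CPython's reference-counting collector, not the patented system:

      ```python
      import weakref

      class Obj:
          def __init__(self, name):
              self.name = name
              self.strong = []   # strong links keep their targets alive
              self.weak = []     # weak links do not block collection

      root, child = Obj("root"), Obj("child")
      root.strong.append(child)               # a "strong" relationship
      root.weak.append(weakref.ref(child))    # a "weak" relationship

      root.strong.clear()                     # break the strong link...
      del child                               # ...so the only remaining link is weak
      print(root.weak[0]() is None)           # True: the object was collected
      ```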

    16. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    17. USING 3D COMPUTER MODELING, BOREHOLE GEOPHYSICS, AND HIGH CAPACITY PUMPS TO RESTORE PRODUCTION TO MARGINAL WELLS IN THE EAST TEXAS FIELD

      SciTech Connect (OSTI)

      R.L. Bassett

      2003-06-09

      Methods for extending the productive life of marginal wells in the East Texas Field were investigated using advanced computer imaging technology, geophysical tools, and selective perforation of existing wells. Funding was provided by the Department of Energy, TENECO Energy and Schlumberger Wireline and Testing. Drillers' logs for more than 100 wells in proximity to the project lease were acquired, converted to digital format using a numerical scheme, and the data were used to create a 3-dimensional geological image of the project site. Using the descriptive drillers' logs in numerical format yielded useful cross sections identifying the Woodbine Austin Chalk contact and continuity of sand zones between wells. The geological data provided information about reservoir continuity, but not the amount of remaining oil; this was obtained using selective modern logs. Schlumberger logged the wells through 2 3/8 inch tubing with a new slimhole Reservoir Saturation Tool (RST), which can measure the oil and water content of the existing porosity, using neutron scattering and a gamma ray spectrometer (GST). The tool provided direct measurements of elemental content yielding interpretations of porosity, lithology, and oil and water content, confirming that significant oil saturation still exists, up to 50% in the upper Woodbine sand. Well testing then began, and at the end of the project new oil was being produced from zones abandoned or bypassed more than 25 years ago.

    18. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    19. Computer analyses for the design, operation and safety of new isotope production reactors: A technology status review

      SciTech Connect (OSTI)

      Wulff, W.

      1990-01-01

      A review is presented on the currently available technologies for nuclear reactor analyses by computer. The important distinction is made between traditional computer calculation and advanced computer simulation. Simulation needs are defined to support the design, operation, maintenance and safety of isotope production reactors. Existing methods of computer analyses are categorized in accordance with the type of computer involved in their execution: micro, mini, mainframe and supercomputers. Both general and special-purpose computers are discussed. Major computer codes are described, with regard to their use in analyzing isotope production reactors. It has been determined in this review that conventional systems codes (TRAC, RELAP5, RETRAN, etc.) cannot meet four essential conditions for viable reactor simulation: simulation fidelity, on-line interactive operation with convenient graphics, high simulation speed, and low cost. These conditions can be met by special-purpose computers (such as the AD100 of ADI), which are specifically designed for high-speed simulation of complex systems. The greatest shortcoming of existing systems codes (TRAC, RELAP5) is their mismatch between very high computational efforts and low simulation fidelity. The drift flux formulation (HIPA) is the viable alternative to the complicated two-fluid model. No existing computer code has the capability of accommodating all important processes in the core geometry of isotope production reactors. Experiments are needed (heat transfer measurements) to provide necessary correlations. It is important for the nuclear community, in government, industry and universities, to begin to take advantage of modern simulation technologies and equipment. 41 refs.

    20. gdb | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gdb Using gdb Preliminaries You should prepare a debug version of your code: Compile using -O0 -g If you are using the XL

    1. Allocations | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Mira/Cetus/Vesta Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allocations ALCF resources are primarily used for DOE INCITE and ALCC awarded projects. Additional information on the INCITE program can be found on the DOE INCITE website and the ALCC program can be found on the Office of

    2. Performing a global barrier operation in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

      2014-12-09

      Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
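
      A sketch of the barrier logic in the claim, using Python threads to stand in for tasks and nodes (the real mechanism is network-based on Blue Gene-class hardware; all names below are invented). `threading.Barrier` gives the "master joins the global barrier only after all local tasks have joined the local barrier" behavior almost for free:

      ```python
      import threading

      def run_node(name, n_tasks, global_barrier, log):
          local = threading.Barrier(n_tasks)        # the single local barrier
          def task(rank):
              local.wait()                          # every task joins the local barrier
              if rank == 0:                         # the designated master task joins
                  global_barrier.wait()             # the global barrier only afterward
                  log.append(name)
          workers = [threading.Thread(target=task, args=(r,)) for r in range(n_tasks)]
          for w in workers:
              w.start()
          for w in workers:
              w.join()

      global_barrier, log = threading.Barrier(2), []
      nodes = [threading.Thread(target=run_node, args=(f"node{i}", 4, global_barrier, log))
               for i in range(2)]
      for t in nodes:
          t.start()
      for t in nodes:
          t.join()
      print("masters past the global barrier:", log)
      ```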

    3. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Engine Combustion/Modeling - Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust aftertreatment systems. The engine combustion modeling is focused on developing Large Eddy Simulation (LES). LES is being used with closely coupled key target experiments to reveal new understanding of the fundamental processes involved in engine

    4. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reacting Flow/Modeling - Turbulence models typically involve coarse-graining and/or time averaging. Though adequate for modeling mean transport, this approach does not address turbulence-microphysics interactions that are important in combustion processes. Subgrid models are developed to represent these interactions. The CRF has developed a fundamentally different representation of these interactions that does not involve distinct coarse-grained and subgrid

    5. computational-structural-mechanics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents Date Location Training Course: HyperMesh and HyperView April 12-14, 2011 Argonne TRACC Argonne, IL Introductory Course: Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster October 21-22, 2010 Argonne TRACC West Chicago, IL Modeling and Simulation with LS-DYNA®: Insights into Modeling with a Goal of Providing Credible Predictive Simulations February 11-12, 2010 Argonne TRACC West Chicago, IL Introductory Course: Using LS-OPT® on the TRACC

    6. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    7. December 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Dept. of Chemical Engineering; Yarbro, S.L. Los Alamos National Lab., NM (United States) (1997) 66 Computational procedures for determining parameters in Ramberg-Osgood ...

    8. June 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Including an examination of the Department of Energys position on quality management Bennett, C.T. (1994) 74 Computational procedures for determining parameters in Ramberg-Osgood ...

    9. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      warm dense matter experiments using the 3D ALE-AMR code and the move toward exascale computing. Alice Koniges, Wangyi Liu, John Barnard, Alex Friedman, Grant Logan, David Eder, Aaron Fisher, Nathan Masters, and Andrea Bertozzi (Lawrence Berkeley National Laboratory; Lawrence Livermore National Laboratory; University of California, Los Angeles). Abstract: The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial

    10. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    11. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    12. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    13. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    14. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    15. Quantum Process Matrix Computation by Monte Carlo

      Energy Science and Technology Software Center (OSTI)

      2012-09-11

      The software package, processMC, is a Python script that allows for the rapid modeling of small, noisy quantum systems and the computation of the averaged quantum evolution map.

    16. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    17. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.

    18. Spent fuel management fee methodology and computer code user's manual.

      SciTech Connect (OSTI)

      Engel, R.L.; White, M.K.

      1982-01-01

      The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. The two phases constitute two computer modules, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
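
      The two-phase structure (project expenditures, then a fee that achieves full cost recovery) amounts to solving for the fee that equates discounted revenues with discounted costs. A toy sketch with placeholder numbers, not values or formulas from the report:

      ```python
      def present_value(cash_flows, rate):
          # Discount a list of yearly amounts back to year zero.
          return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

      costs = [50.0, 120.0, 300.0]     # projected expenditures by year, $M (placeholders)
      fuel_received = [0.9, 1.0, 1.1]  # projected spent fuel receipts by year, kt (placeholders)
      discount_rate = 0.03

      # Full cost recovery: choose the fee so discounted revenues equal discounted costs.
      fee = present_value(costs, discount_rate) / present_value(fuel_received, discount_rate)
      print(f"break-even fee: {fee:.1f} $M per kilotonne")
      ```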

    19. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      WVMinputs-outputs Permalink Gallery Sandia Labs releases wavelet variability model (WVM) Modeling, News, Photovoltaic, Solar Sandia Labs releases wavelet variability model (WVM) When a single solar photovoltaic (PV) module is in full sunlight, then is shaded by a cloud, and is back in full sunlight in a matter of seconds, a sharp dip then increase in power output will result. However, over an entire PV plant, clouds will often uncover some modules even as they cover others, [...] By Andrea

    20. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A rail tank car of the type used to transport crude oil across North America. Recent incidents have raised concerns about the safety of this practice, which the DOE-DOT-sponsored team is investigating. (photo credit: Harvey Henkelmann) Permalink Gallery Expansion of DOE-DOT Tight Oil Research Work Capabilities, Carbon Capture & Storage, Carbon Storage, Energy, Energy Assurance, Energy Assurance, Fuel Options, Infrastructure Assurance, Infrastructure Security, Modeling, Modeling, Modeling

    1. Applying computationally efficient schemes for biogeochemical cycles

      Office of Scientific and Technical Information (OSTI)

      (ACES4BGC) (Technical Report) | SciTech Connect Applying computationally efficient schemes for biogeochemical cycles (ACES4BGC) Citation Details In-Document Search Title: Applying computationally efficient schemes for biogeochemical cycles (ACES4BGC) NCAR contributed to the ACES4BGC project through software engineering work on aerosol model implementation, build system and script changes, coupler enhancements for biogeochemical tracers, improvements to the Community Land Model (CLM) code and

    2. Modeling shock initiation in Composition B

      SciTech Connect (OSTI)

      Murphy, M.J.; Lee, E.L.; Weston, A.M.; Williams, A.E.

      1993-05-01

      A hydrodynamic modeling study of the shock initiation behavior of Composition B explosive was performed using the "Ignition and Growth of Reaction in High Explosive" model developed at the Lawrence Livermore National Laboratory. The HE (heterogeneous explosives) responses were computed using the CALE and DYNA2D hydrocodes and then compared to experimental results. The data from several standard shock initiation and HE performance experiments were used to determine the parameters required for the model. Simulations of the wedge tests (Pop plots) and failure diameter tests were found to be sufficient for defining the ignition and growth parameters used in the two-term version of the computational model. These coefficients were then applied in the response analysis of several Composition B impact initiation experiments. The methodology used to determine the coefficients and the resulting range of useful application of the ignition and growth of reaction model are described.

    3. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    4. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    5. BLT-EC (Breach, Leach and Transport-Equilibrium Chemistry) data input guide. A computer model for simulating release and coupled geochemical transport of contaminants from a subsurface disposal facility

      SciTech Connect (OSTI)

      MacKinnon, R.J.; Sullivan, T.M.; Kinsey, R.R.

      1997-05-01

      The BLT-EC computer code has been developed, implemented, and tested. BLT-EC is a two-dimensional finite element computer code capable of simulating the time-dependent release and reactive transport of aqueous phase species in a subsurface soil system. BLT-EC contains models to simulate the processes (container degradation, waste-form performance, transport, chemical reactions, and radioactive production and decay) most relevant to estimating the release and transport of contaminants from a subsurface disposal system. Water flow is provided through tabular input or auxiliary files. Container degradation considers localized failure due to pitting corrosion and general failure due to uniform surface degradation processes. Waste-form performance considers release to be limited by one of four mechanisms: rinse with partitioning, diffusion, uniform surface degradation, and solubility. Transport considers the processes of advection, dispersion, diffusion, chemical reaction, radioactive production and decay, and sources (waste form releases). Chemical reactions accounted for include complexation, sorption, dissolution-precipitation, oxidation-reduction, and ion exchange. Radioactive production and decay in the waste form is simulated. To improve the usefulness of BLT-EC, a pre-processor, ECIN, which assists in the creation of chemistry input files, and a post-processor, BLTPLOT, which provides a visual display of the data have been developed. BLT-EC also includes an extensive database of thermodynamic data that is also accessible to ECIN. This document reviews the models implemented in BLT-EC and serves as a guide to creating input files and applying BLT-EC.

    6. Programs | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ALCC 2015 Projects ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Featured Science Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code Magnetohydrodynamic Models of Accretion Including Radiation Transport James Stone Allocation Program: INCITE Allocation Hours: 47 Million

    7. Michael Levitt and Computational Biology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Michael Levitt and Computational Biology Resources with Additional Information * Publications Michael Levitt Courtesy of Linda A. Cicero / Stanford News Service Michael Levitt, PhD, professor of structural biology at the Stanford University School of Medicine, has won the 2013 Nobel Prize in Chemistry. ... Levitt ... shares the ... prize with Martin Karplus ... and Arieh Warshel ... "for the development of multiscale models for complex chemical systems." Levitt's work focuses on

    8. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing...

    9. Closed loop computer control for an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

    10. System for computer controlled shifting of an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

    11. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
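
      A condensed sketch of the protocol steps above: stream eager FIFO chunks until the RTS is acknowledged, then move the remainder with one direct put. The callables and chunk size are invented stand-ins for the DMA engines:

      ```python
      def origin_send(data, chunk, send_rts, fifo_put, direct_put, ack_received):
          send_rts()                                 # request-to-send message
          offset = 0
          fifo_put(data[offset:offset + chunk])      # first eager FIFO chunk
          offset += chunk
          while not ack_received():                  # RTS not yet acknowledged:
              fifo_put(data[offset:offset + chunk])  # keep streaming FIFO chunks
              offset += chunk
          direct_put(data[offset:])                  # remainder via direct put

      acks = iter([False, True])
      log = []
      origin_send(b"abcdefgh", 2, send_rts=lambda: None, fifo_put=log.append,
                  direct_put=lambda rest: log.append(b"PUT:" + rest),
                  ack_received=lambda: next(acks))
      print(log)  # [b'ab', b'cd', b'PUT:efgh']
      ```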

    12. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2013-07-23

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

    13. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2014-01-07

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
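
      Both intranode records above hinge on pre-allocating one message buffer per process, so a sender never has to check whether its peer has initialized. A dictionary-based analogy (real shared memory, ranks, and pointers omitted; all names are mine):

      ```python
      # Pre-allocated per-process inboxes; a dict stands in for the shared-memory region.
      N_PROCS = 4
      shared_region = {rank: [] for rank in range(N_PROCS)}

      def send(dst_rank, message):
          # No handshake: deposit directly in the peer's buffer, initialized or not.
          shared_region[dst_rank].append(message)

      def on_init(rank):
          inbox = shared_region[rank]     # "pointer" to this process's own buffer
          return list(inbox)              # drain anything sent before initialization

      send(2, "hello")                    # process 2 has not initialized yet
      print(on_init(2))                   # -> ['hello']
      ```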

    14. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    15. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX Home » R & D » Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its

    16. Large Scale Production Computing and Storage Requirements for Basic Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017 BES-Montage.png This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

    17. QUEST Hanford Site Computer Users - What do they do?

      SciTech Connect (OSTI)

      WITHERSPOON, T.T.

      2000-03-02

      The Fluor Hanford Chief Information Office requested that a computer-user survey be conducted to determine users' dependence on the computer and its importance to their ability to accomplish their work. Daily use trends and future needs of Hanford Site personal computer (PC) users were also to be defined. A primary objective was to use the data to determine how budgets should be focused toward providing those services that are truly needed by the users.

    18. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    19. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    20. Method and system for benchmarking computers

      DOE Patents [OSTI]

      Gustafson, John L.

      1993-09-14

      A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
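
      The rating procedure reduces to: give each machine a fixed wall-clock interval and count how far it progresses through a scalable task set. A minimal sketch; the workload passed as `work_unit` is a placeholder, not the patented task set:

      ```python
      import time

      def benchmark_rating(work_unit, interval_s=1.0):
          # Fixed benchmarking interval: perform as many units of the scalable
          # task set as possible, then report the degree of progress.
          deadline = time.monotonic() + interval_s
          units_done = 0
          while time.monotonic() < deadline:
              work_unit(units_done)      # each unit refines the solution further
              units_done += 1
          return units_done              # progress through the task set = rating

      # Placeholder workload; a real harness would use a scalable scientific kernel.
      print(benchmark_rating(lambda i: sum(x * x for x in range(1000))))
      ```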

    1. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    2. Most Viewed Documents - Mathematics and Computing | OSTI, US...

      Office of Scientific and Technical Information (OSTI)

      - Mathematics and Computing Metaphors for cyber security. Moore, Judy Hennessey; Parrott, Lori K.; Karas, Thomas H. (2008) Staggered-grid finite-difference acoustic modeling with ...

    3. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      in warm dense matter experiments with diffuse interface methods in the ALE-AMR code. Wangyi Liu, John Barnard, Alex Friedman, Nathan Masters, Aaron Fisher, Velemir Mlaker, Alice Koniges, David Eder. August 4, 2011. Abstract: In this paper we describe an implementation of a single-fluid interface model in the ALE-AMR code to simulate surface tension effects. The model does not require explicit information on the physical state of the two phases. The only change to the existing fluid

    4. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, including performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    5. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities Information Science, Computing, Applied Math National security ...

    6. Electrolux: Compliance Determination (2010-SE-0108)

      Broader source: Energy.gov [DOE]

      After conducting testing of Electrolux's Frigidaire chest freezer model FFN09M5HWC, DOE determined that the model met the applicable energy conservation standard.

    7. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.

    8. Accelerating Computation of the Unit Commitment Problem (Presentation)

      SciTech Connect (OSTI)

      Hummon, M.; Barrows, C.; Jones, W.

      2013-10-01

      Production cost models (PCMs) simulate power system operation at hourly (or higher) resolution. Computation times often extend to multiple days, and the sequential nature of PCMs makes parallelization difficult. We exploit the persistence of unit commitment decisions to select partition boundaries for simulation horizon decomposition and parallel computation. Partitioned simulations are benchmarked against sequential solutions for optimality and computation time.
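
      A heuristic reading of "exploiting the persistence of unit commitment decisions": cut the simulation horizon at hours where the committed-unit set has been stable for several consecutive hours, then solve the partitions in parallel. This is an illustration of the idea under my own assumptions, not NREL's algorithm:

      ```python
      def partition_boundaries(commitments, min_persist=4):
          # commitments: one frozenset of committed units per hour.
          # Cut where the committed-unit set has persisted min_persist hours.
          cuts, run = [], 1
          for t in range(1, len(commitments)):
              run = run + 1 if commitments[t] == commitments[t - 1] else 1
              if run == min_persist:
                  cuts.append(t)   # a safe cut: decisions have settled here
          return cuts

      schedule = [frozenset({"coal1"})] * 6 + [frozenset({"coal1", "gas1"})] * 6
      print(partition_boundaries(schedule))  # -> [3, 9]
      ```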

    9. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    10. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

    11. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPU are also discussed. General purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of GPU requires a comprehensive understanding of the hardware and programming model to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using GPU to speed up the dynamic aperture calculation by having each thread track a particle.
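
      A sketch of the thread-per-particle mapping using Numba's CUDA interface (requires a CUDA-capable GPU and the numba package). This is not the TracyGPU code; the one-turn map below is a toy linear stand-in, not Tracy++ lattice physics:

      ```python
      import numpy as np
      from numba import cuda

      @cuda.jit
      def track(x, px, n_turns):
          i = cuda.grid(1)              # global thread index: one thread per particle
          if i < x.shape[0]:
              xi, pi = x[i], px[i]
              for _ in range(n_turns):  # toy linear one-turn map (illustrative only)
                  xi, pi = pi, -xi + 0.1 * pi
              x[i], px[i] = xi, pi

      x = np.random.rand(4096)
      px = np.zeros_like(x)
      threads = 256
      blocks = (x.shape[0] + threads - 1) // threads  # cover every particle
      track[blocks, threads](x, px, 1000)
      print(x[:3], px[:3])
      ```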

    12. Improved computer models support genetics research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      can lead to major changes, such as an Alzheimer's gene turning on or off or a cancer cell not responding to chemotherapy. Are these random events due to chance or is there an...

    13. Accelerator Modeling for Discovery | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      identified three scientific drivers that require accelerator-based experiments (using the Higgs boson as a new tool for discovery, pursuing the physics associated with neutrino...

    14. Bayesian approaches for combining computational model output...

      Office of Scientific and Technical Information (OSTI)

      ANL Publication Date: 2011-07-25 OSTI Identifier: 1084581 Report Number(s): LA-UR-11-04315; LA-UR-11-4315 DOE Contract Number: AC52-06NA25396 Resource Type: Conference...

    15. Computational Nanophotonics: modeling optical interactions and...

      Office of Scientific and Technical Information (OSTI)

      This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray,...

    16. modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      modeling - Sandia Energy

    17. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NASA Earth at Night Video: Have you ever wondered what the Earth looks like at night? NASA provides a clear, cloud-free view of the Earth at night using the Suomi National Polar-orbiting Partnership satellite. The satellite carries an instrument known as the Visible Infrared Imaging Radiometer Suite (VIIRS), which allows it to capture images of a "remarkably detailed

    18. Local Orthogonal Cutting Method for Computing Medial Curves and Its

      Office of Scientific and Technical Information (OSTI)

      Biomedical Applications (Journal Article) | SciTech Connect. Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer-assisted surgery). The computation of
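
      The record's own algorithm is not described beyond its title. For readers who simply want to experiment with medial curves, a standard generic alternative (not the paper's local orthogonal cutting method) is the medial-axis transform available in scikit-image:

        # Medial-axis transform of a 2-D binary shape via scikit-image, a
        # generic technique shown only to illustrate what a medial curve is.
        import numpy as np
        from skimage.morphology import medial_axis

        # A simple binary shape: a filled rectangle with a notch cut out
        shape = np.zeros((60, 100), dtype=bool)
        shape[10:50, 10:90] = True
        shape[25:35, 40:60] = False

        skeleton, distance = medial_axis(shape, return_distance=True)
        # 'skeleton' marks the medial curve; 'distance' holds the radius of
        # the maximal inscribed disk at each point (useful in morphometry).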

    19. Barracuda® Computational Particle Fluid Dynamics (CPFD®) Software |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy. Barracuda® Computational Particle Fluid Dynamics (CPFD®) Software: Innovative Software Program Extends the Capabilities of CFD by Modeling Solid Particle Movement. Invented at the Los Alamos Scientific Laboratory in the 1950s and '60s, computational fluid dynamics (CFD) is a mathematical expression of the physics of the movement of fluids (liquids and gases). CFD computer software simulates real-world
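
      Barracuda's solver is proprietary and not part of this record. Purely to make "a mathematical expression of the physics of the movement of fluids" concrete, here is a toy one-dimensional advection-diffusion step in the explicit finite-difference style that many CFD codes build on; it is unrelated to Barracuda's actual method.

        # Toy 1-D advection-diffusion update on a periodic domain:
        #   dc/dt = -u * dc/dx + D * d2c/dx2
        # discretized with central differences and explicit time stepping.
        import numpy as np

        def step(c, u, D, dx, dt):
            dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
            d2cdx2 = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
            return c + dt * (-u * dcdx + D * d2cdx2)

        nx, dx, dt = 200, 0.01, 1e-4
        c = np.exp(-((np.arange(nx) * dx - 0.5) ** 2) / 0.002)  # Gaussian pulse
        for _ in range(1000):
            c = step(c, u=1.0, D=1e-3, dx=dx, dt=dt)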

    20. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic 'machine,' is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994). Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that, on the one hand, people can and do abuse computer systems and, on the other hand, people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".