National Library of Energy BETA

Sample records for including computer modeling

  1. Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations. S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud

  2. Theory, Modeling and Computation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Theory, Modeling and Computation. The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but also by the increased computational capacity made possible by the advent of extreme computing. Contact: Jack Shlachter, (505) 665-1888. Extreme Computing to Power Accurate Atomistic Simulations: advances in high-performance computing and theory will allow longer and larger atomistic simulations than are currently possible.

  3. Computational Modeling | Bioenergy | NREL

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Modeling. NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. [Image: model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands.] Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

  4. Magnetohydrodynamic Models of Accretion Including Radiation Transport |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility. Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code. Left half of the image shows the density (in units of 0.01 g/cm^3), and the right half shows the radiation energy density (in units of the energy density for a 10^7 degree black body). Coordinate axes are

  5. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing in which haptic feedback controls interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
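
    The patent's core mechanism lends itself to a worked example: a minimal sketch (not the patented implementation; the threshold and gain values below are invented) of mapping force applied against a haptic boundary to a scroll rate proportional to that force.

        # Minimal sketch of force-to-scroll-rate mapping; constants are illustrative.
        def scroll_rate(force_newtons, threshold=0.5, gain=120.0, max_rate=600.0):
            """Map force applied past a haptic boundary to scroll speed (px/s)."""
            if abs(force_newtons) < threshold:
                return 0.0                      # cursor resting at the boundary: no scroll
            rate = gain * (abs(force_newtons) - threshold)
            return min(rate, max_rate) * (1 if force_newtons > 0 else -1)

        for f in (0.2, 1.0, 3.0, -2.0):
            print(f, "->", scroll_rate(f), "px/s")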

  6. Parallel computing in enterprise modeling.

    SciTech Connect (OSTI)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
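
    Since the report stresses the absence of a spatial organizing principle, one generic (hypothetical, not the Parallel Particle Data Model's actual scheme) way to decompose such a problem is to hash entity identifiers onto processors:

        # Minimal sketch: non-spatial entity partitioning by hashing IDs to ranks.
        from hashlib import sha256

        def owner_rank(entity_id: str, n_ranks: int) -> int:
            """Assign an entity to a rank by hashing its ID; no spatial layout needed."""
            digest = sha256(entity_id.encode()).digest()
            return int.from_bytes(digest[:8], "big") % n_ranks

        partition = {}
        for eid in (f"agent-{i}" for i in range(10)):
            partition.setdefault(owner_rank(eid, n_ranks=4), []).append(eid)
        print(partition)  # entities spread roughly evenly across 4 ranks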

  7. Measuring and modeling the lifetime of nitrous oxide including...

    Office of Scientific and Technical Information (OSTI)

    Published Article: Measuring and modeling the lifetime of nitrous oxide including its variability: NITROUS OXIDE AND ITS CHANGING LIFETIME. Title: Measuring and ...

  8. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

    Office of Scientific and Technical Information (OSTI)

    Title: Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces ...

  9. Computer modeling helps manage wildfires

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling helps manage wildfires. Community Connections: Your link to news and opportunities from Los Alamos National Laboratory, September 1, 2016. Technology increases preparedness, improves firefighting strategies. Smoke over the Jemez Mountains during the 2011 Las Conchas wildfire. Contacts: Director, Community

  10. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LANL computer model boosts engine efficiency. The KIVA model has been instrumental in helping researchers and manufacturers understand...

  11. A model for heterogeneous materials including phase transformations

    SciTech Connect (OSTI)

    Addessio, F.L.; Clements, B.E.; Williams, T.O.

    2005-04-15

    A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

  12. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells. Integrated biological and...

  13. A coke oven model including thermal decomposition kinetics of tar

    SciTech Connect (OSTI)

    Munekane, Fuminori; Yamaguchi, Yukio; Tanioka, Seiichi

    1997-12-31

    A new one-dimensional coke oven model has been developed for simulating the amount and characteristics of by-products such as tar and gas as well as coke. This model consists of both heat transfer and chemical kinetics, including the thermal decomposition of coal and tar. The chemical kinetic constants are estimated from experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with experimental ones.
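
    As a rough illustration of the coupled kinetics the abstract describes, a first-order Arrhenius decomposition integrated over a linear heating schedule might look like the following (all constants are invented placeholders, not the paper's fitted values):

        import math

        A = 1.0e10   # pre-exponential factor, 1/s (assumed)
        E = 1.8e5    # activation energy, J/mol (assumed)
        R = 8.314    # gas constant, J/(mol K)

        def decompose(T0=300.0, heat_rate=0.05, t_end=20000.0, dt=1.0):
            """Integrate dX/dt = A*exp(-E/(R*T))*(1-X) with T = T0 + heat_rate*t."""
            X, t = 0.0, 0.0
            while t < t_end and X < 0.999:
                T = T0 + heat_rate * t
                k = A * math.exp(-E / (R * T))
                X += k * (1.0 - X) * dt
                t += dt
            return t, X

        print(decompose())  # (time elapsed, fraction of coal/tar decomposed)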

  14. System Advisor Model Includes Analysis of Hybrid CSP Option ...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    concepts related to power generation have been missing from the System Advisor Model (SAM). One such concept, until now, was a hybrid integrated solar combined-cycle (ISCC)...

  15. Predictive Capability Maturity Model for computational modeling...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: USDOE. Country of Publication: United States. Language: English. Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, ...

  16. Cupola Furnace Computer Process Model

    SciTech Connect (OSTI)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the Cupola Handbook, Chapter 27, American Foundry Society, Des Plaines, IL (1999).
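
    The iterate-on-diagnostics workflow described above can be sketched as a simple loop; the surrogate model and adjustment rule here are toy stand-ins, not the actual cupola simulation:

        def cupola_model(blast_rate):
            """Toy surrogate: efficiency peaks at an (assumed) optimal blast rate."""
            optimum = 42.0
            efficiency = 0.85 - 0.0005 * (blast_rate - optimum) ** 2
            diagnostic = "raise blast" if blast_rate < optimum else "lower blast"
            return efficiency, diagnostic

        blast, step = 30.0, 4.0
        for it in range(6):  # "just a few iterations," per the abstract
            eff, hint = cupola_model(blast)
            print(f"iter {it}: blast={blast:.1f}, efficiency={eff:.3f}, hint={hint}")
            blast += step if hint == "raise blast" else -step
            step *= 0.5      # shrink the adjustment as the operation converges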

  17. Cement-aggregate compatibility and structure property relationships including modelling

    SciTech Connect (OSTI)

    Jennings, H.M.; Xi, Y.

    1993-07-15

    The role of aggregate, and its interface with cement paste, is discussed with a view toward establishing models that relate structure to properties. Both short-range (nm) and long-range (mm) structure must be considered. The short-range structure of the interface depends not only on the physical distribution of the various phases, but also on the moisture content and reactivity of the aggregate. Changes that occur on drying, i.e., shrinkage, may alter the structure, which, in turn, feeds back to alter further drying and shrinkage. The interaction is dynamic, even without further hydration of cement paste, and the dynamic characteristic must be considered in order to fully understand and model its contribution to properties. Microstructure and properties are two subjects which have been pursued somewhat separately. This review discusses both disciplines with a view toward finding common research goals in the future. Finally, comment is made on possible chemical reactions which may occur between aggregate and cement paste.

  18. Computable General Equilibrium Models for Sustainability Impact...

    Open Energy Info (EERE)

    Publications, Software/modeling tools. User Interface: Other. Website: iatools.jrc.ec.europa.eu/docs/ecolecon2006.pdf. Computable General Equilibrium Models for Sustainability...

  19. Generalized Modeling of Enrichment Cascades That Include Minor Isotopes

    SciTech Connect (OSTI)

    Weber, Charles F

    2012-01-01

    The monitoring of enrichment operations may require innovative analysis to allow for imperfect or missing data. The presence of minor isotopes may help or hurt - they can complicate a calculation or provide additional data to corroborate a calculation. However, they must be considered in a rigorous analysis, especially in cases involving reuse. This study considers matched-abundance-ratio cascades that involve at least three isotopes and allows generalized input that does not require all feed assays or the enrichment factor to be specified. Calculations are based on the equations developed for the MSTAR code but are generalized to allow input of various combinations of assays, flows, and other cascade properties. Traditional cascade models have required specification of the enrichment factor, all feed assays, and the product and waste assays of the primary enriched component. The calculation would then produce the numbers of stages in the enriching and stripping sections and the remaining assays in waste and product streams. In cases where the enrichment factor or feed assays were not known, analysis was difficult or impossible. However, if other quantities are known (e.g., additional assays in waste or product streams), a reliable calculation is still possible with the new code, but such nonstandard input may introduce additional numerical difficulties into the calculation. Thus, the minimum input requirements for a stable solution are discussed, and a sample problem with a non-unique solution is described. Both heuristic and mathematically required guidelines are given to assist the application of cascade modeling to situations involving such non-standard input. As a result, this work provides both a calculational tool and specific guidance for evaluation of enrichment cascades in which traditional input data are either flawed or unknown. It is useful for cases involving minor isotopes, especially if the minor isotope assays are desired (or required) to be
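
    For orientation, a highly simplified matched-abundance-ratio toy model (not the MSTAR equations; the per-unit-mass factor q, key mass M*, and stage count below are invented) scales each isotope's abundance by a factor that grows with its mass offset from the key mass:

        def product_assays(feed, n_stages, q=1.1, m_star=235.5):
            """feed: {mass number: atom fraction}; returns normalized product
            assays after n_stages of an idealized matched-R cascade."""
            scaled = {m: x * q ** ((m_star - m) * n_stages) for m, x in feed.items()}
            total = sum(scaled.values())
            return {m: s / total for m, s in scaled.items()}

        feed = {234: 0.000055, 235: 0.0072, 238: 0.992745}  # natural uranium (approx.)
        print(product_assays(feed, n_stages=5))  # the lightest isotope enriches fastest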

  20. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  1. Improved computer models support genetics research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined experimental and

  3. Low Mach Number Models in Computational Astrophysics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Low Mach Number Models in Computational Astrophysics, February 4, 2014. Ann Almgren, Berkeley Lab. Downloads: Almgren-nug2014.pdf (Adobe Acrobat PDF file). Last edited: 2016-04-29 11:34:50

  4. Computational social dynamic modeling of group recruitment.

    SciTech Connect (OSTI)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, the abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.
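
    As a loose illustration of the three-level architecture (class names and behaviors are invented for this sketch, not Seldon's actual API):

        class SimpleAgent:                 # level one: an individual
            def __init__(self, name):
                self.name, self.ties = name, []

        class AbstractAgent:               # level two: a social/institutional concept
            def __init__(self, label):
                self.label, self.members = label, []
            def recruit(self, agent):
                self.members.append(agent)
                agent.ties.append(self)    # social-network edge across levels

        class CognitiveAgent(SimpleAgent): # level three: planned future addition
            pass

        gang, school = AbstractAgent("gang"), AbstractAgent("school")
        kid = SimpleAgent("kid-1")
        school.recruit(kid)
        gang.recruit(kid)
        print(kid.name, "->", [a.label for a in kid.ties])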

  5. LANL computer model boosts engine efficiency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LANL computer model boosts engine efficiency. The KIVA model has been instrumental in helping researchers and manufacturers understand combustion processes, accelerate engine development and improve engine design and efficiency. September 25, 2012. KIVA simulation of an experimental engine with DOHC quasi-symmetric pent-roof combustion chamber and 4 valves.

  6. Section 23: Models and Computer Codes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application-2014 for the Waste Isolation Pilot Plant Models and Computer Codes (40 CFR § 194.23) United States Department of Energy Waste Isolation Pilot Plant Carlsbad Field Office Carlsbad, New Mexico Compliance Recertification Application 2014 Models and Computer Codes (40 CFR § 194.23) Table of Contents 23.0 Models and Computer Codes (40 CFR § 194.23) 23.1 Requirements 23.2 40 CFR § 194.23(a)(1) 23.2.1 Background 23.2.2 1998 Certification Decision 23.2.3 Changes in the CRA-2004 23.2.4

  7. HIV virus spread and evolution studied through computer modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HIV virus spread and evolution studied through computer modeling. This approach distinguishes between susceptible and infected individuals to capture the full infection history, including contact tracing data for infected individuals. November 19, 2013. Scanning electron micrograph of HIV-1 budding (in green) from cultured lymphocytes. The image has been colored to highlight important features.

  8. Computer Model Buildings Contaminated with Radioactive Material

    Energy Science and Technology Software Center (OSTI)

    1998-05-19

    The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

  9. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

    SciTech Connect (OSTI)

    Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios

    2015-06-09

    This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.

  10. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect (OSTI)

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  11. RELAP5-3D Code Includes ATHENA Features and Models

    SciTech Connect (OSTI)

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  12. CDF computing and event data models

    SciTech Connect (OSTI)

    Snider, F.D. (Fermilab)

    2005-12-01

    The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

  13. Computational Tools for Predictive Modeling of Properties in...

    Office of Scientific and Technical Information (OSTI)

    Book: Computational Tools for Predictive Modeling of Properties in Complex Actinide Systems ...

  14. Computational Fluid Dynamics Modeling of Diesel Engine Combustion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Computational Fluid Dynamics Modeling of Diesel Engine Combustion and Emissions. 2005 Diesel Engine ...

  15. MaRIE theory, modeling and computation roadmap executive summary...

    Office of Scientific and Technical Information (OSTI)

    Conference: MaRIE theory, modeling and computation roadmap executive summary ...

  16. Computer Modeling of Chemical and Geochemical Processes in High...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling of chemical and geochemical processes in high ionic strength solutions ... in brine ...

  17. Towards a Computational Model of a Methane Producing Archaeum...

    Office of Scientific and Technical Information (OSTI)

    Title: Towards a Computational Model of a Methane Producing Archaeum. Authors: ...

  18. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Broader source: Energy.gov (indexed) [DOE]

    Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines ...

  19. Computer modeling of the global warming effect

    SciTech Connect (OSTI)

    Washington, W.M.

    1993-12-31

    The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean currents, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

  20. District-heating strategy model: computer programmer's manual

    SciTech Connect (OSTI)

    Kuzanek, J.F.

    1982-05-01

    The US Department of Housing and Urban Development (HUD) and the US Department of Energy (DOE) cosponsor a program aimed at increasing the number of district heating and cooling (DHC) systems. Such systems can reduce the amount and costs of fuels used to heat and cool buildings in a district. Twenty-eight communities have agreed to aid HUD in a national feasibility assessment of DHC systems. The HUD/DOE program entails technical assistance by Argonne National Laboratory and Oak Ridge National Laboratory. The assistance includes a computer program, called the district heating strategy model (DHSM), that performs preliminary calculations to analyze potential DHC systems. This report describes the general capabilities of the DHSM, provides historical background on its development, and explains the computer installation and operation of the model, including the data file structures and the options. Sample problems illustrate the structure of the various input data files and the interactive computer-output listings. The report is written primarily for computer programmers responsible for installing the model on their computer systems, entering data, running the model, and implementing local modifications to the code. (LEW)

  1. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering (Presentation)

    SciTech Connect (OSTI)

    Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.

    2014-06-01

    This presentation discusses the significant enhancement of computational efficiency in a nonlinear multiscale battery model for computer-aided engineering in current research at NREL.

  2. Wild Fire Computer Model Helps Firefighters

    ScienceCinema (OSTI)

    Canfield, Jesse

    2014-06-02

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  3. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect (OSTI)

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
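
    A minimal sketch of one such frugal method, a local scaled-sensitivity analysis needing one run per parameter (the model function and parameter values are invented for illustration):

        def model(params):
            k, s, r = params["k"], params["s"], params["r"]
            return k * s / (1.0 + r)       # stand-in for an expensive simulation

        def scaled_sensitivities(base, rel_step=0.01):
            """One perturbed run per parameter; runs are independent, so they
            parallelize trivially, which is what keeps the method frugal."""
            y0 = model(base)
            out = {}
            for name, val in base.items():
                pert = dict(base, **{name: val * (1 + rel_step)})
                out[name] = ((model(pert) - y0) / y0) / rel_step
            return out

        print(scaled_sensitivities({"k": 2.0, "s": 5.0, "r": 0.5}))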

  4. Practical Use of Computationally Frugal Model Analysis Methods

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  5. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect (OSTI)

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  6. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect (OSTI)

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, which include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  7. Fast, narrow-band computer model for radiation calculations

    SciTech Connect (OSTI)

    Yan, Z.; Holmstedt, G.

    1997-01-01

    A fast, narrow-band computer model, FASTNB, which predicts the radiation intensity in a general nonisothermal and nonhomogeneous combustion environment, has been developed. The spectral absorption coefficients of the combustion products, including carbon dioxide, water vapor, and soot, are calculated based on the narrow-band model. FASTNB provides an accurate calculation at reasonably high speed. Compared with Grosshandler's narrow-band model, RADCAL, which has been verified quite extensively against experimental measurements, FASTNB is more than 20 times faster and gives almost exactly the same results.
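
    The band-summation idea behind such narrow-band models can be sketched generically (this is Beer-Lambert absorption with Planck weighting, not FASTNB's actual formulation; the band data below are invented):

        import math

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

        def planck(wavelength_m, T):
            """Spectral blackbody intensity, W/(m^2 sr m)."""
            a = 2 * H * C**2 / wavelength_m**5
            return a / (math.exp(H * C / (wavelength_m * KB * T)) - 1.0)

        def band_intensity(T, path_m, bands):
            """Sum over narrow bands: Planck intensity x (1 - exp(-kappa*L)) x width."""
            total = 0.0
            for lo_um, hi_um, kappa in bands:       # band edges in microns
                mid = 0.5 * (lo_um + hi_um) * 1e-6
                width = (hi_um - lo_um) * 1e-6
                total += planck(mid, T) * (1 - math.exp(-kappa * path_m)) * width
            return total

        bands = [(2.6, 2.9, 5.0), (4.2, 4.5, 12.0)]  # H2O- and CO2-like bands (illustrative)
        print(band_intensity(T=1500.0, path_m=0.5, bands=bands))  # W/(m^2 sr)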

  8. Computational models of intergroup competition and warfare.

    SciTech Connect (OSTI)

    Letendre, Kenneth; Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  9. ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION (Conference...

    Office of Scientific and Technical Information (OSTI)

    Clearly, if this happens in a quantum computer, it may lead to a destruction of the ... Numerical analysis [2] of a simplest model of a quantum computer (2D model of 12 spins with ...

  10. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Pritchett, 2004). Exploration Activity...

  11. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Pritchett, 2004). Exploration...

  12. Modeling-Computer Simulations At Geysers Area (Goff & Decker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Geysers Area (Goff & Decker, 1983). Exploration Activity Details...

  13. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wisian & Blackwell, 2004). Exploration...

  14. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1980). Exploration Activity Details...

  15. Modeling-Computer Simulations (Lewicki & Oldenburg, 2004) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Lewicki & Oldenburg, 2004). Exploration Activity Details Location...

  16. Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell, 2004). Exploration Activity...

  17. Modeling-Computer Simulations (Combs, Et Al., 1999) | Open Energy...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Combs, Et Al., 1999). Exploration Activity Details Location Unspecified...

  18. Modeling-Computer Simulations At Yellowstone Region (Laney, 2005...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Yellowstone Region (Laney, 2005). Exploration Activity Details Location...

  19. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1979). Exploration Activity Details...

  20. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1977). Exploration Activity Details...

  1. Modeling-Computer Simulations (Ozkocak, 1985) | Open Energy Informatio...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ozkocak, 1985). Exploration Activity Details Location Unspecified...

  2. Modeling-Computer Simulations At White Mountains Area (Goff ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At White Mountains Area (Goff & Decker, 1983). Exploration Activity...

  3. Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell, 2004). Exploration Activity...

  4. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Wilt & Haar, 1986)...

  5. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Kennedy & Soest, 2006). Exploration...

  6. Modeling-Computer Simulations (Ranalli & Rybach, 2005) | Open...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations (Ranalli & Rybach, 2005). Exploration Activity Details Location...

  7. Modeling-Computer Simulations At Raft River Geothermal Area ...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Raft River Geothermal Area (1983). Exploration Activity Details...

  8. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    SciTech Connect (OSTI)

    Wang, P; Song, Y T; Chao, Y; Zhang, H

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
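
    The decomposition idea can be illustrated with a generic 1-D halo exchange in mpi4py (a sketch of the general MPI pattern, not ROMS's actual code; run with, e.g., mpiexec -n 4 python halo.py, where halo.py is a hypothetical filename):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nx_global = 64
        nx_local = nx_global // size                 # assume size divides nx_global
        field = np.full(nx_local + 2, float(rank))   # +2 ghost cells, one per end

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Exchange ghost cells with neighbors before each stencil update.
        comm.Sendrecv(field[1:2], dest=left, recvbuf=field[-1:], source=right)
        comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

        # Interior update (e.g., diffusion) can now read the refreshed ghosts.
        field[1:-1] += 0.1 * (field[:-2] - 2 * field[1:-1] + field[2:])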

  9. Preliminary Phase Field Computational Model Development

    SciTech Connect (OSTI)

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be captured. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major-loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
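
    For reference, the Landau-Lifshitz-Gilbert equation named in the abstract is, in its standard Gilbert form (gamma the gyromagnetic ratio, alpha the damping parameter, M_s the saturation magnetization, and H_eff the problem-specific effective field):

        \frac{\partial \mathbf{M}}{\partial t}
            = -\gamma \, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
              + \frac{\alpha}{M_s} \, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}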

  10. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  11. Computational model of miniature pulsating heat pipes.

    SciTech Connect (OSTI)

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  12. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect (OSTI)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
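
    The six elements and four maturity levels lend themselves to a simple assessment table; a toy scoring sketch follows (the 0-3 numeric scale and the example scores are illustrative, not part of the PCMM itself):

        ELEMENTS = [
            "representation and geometric fidelity",
            "physics and material model fidelity",
            "code verification",
            "solution verification",
            "model validation",
            "uncertainty quantification and sensitivity analysis",
        ]

        def pcmm_table(scores):
            """Render an element -> maturity-level (0..3) assessment."""
            assert set(scores) == set(ELEMENTS)
            width = max(map(len, ELEMENTS))
            return "\n".join(f"{e:<{width}}  level {scores[e]}" for e in ELEMENTS)

        print(pcmm_table({e: i % 4 for i, e in enumerate(ELEMENTS)}))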

  13. Review of computational thermal-hydraulic modeling

    SciTech Connect (OSTI)

    Keefer, R.H.; Keeton, L.W.

    1995-12-31

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  14. Modeling of Geothermal Reservoirs: Fundamental Processes, Computer...

    Open Energy Info (EERE)

    Modeling of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications. OpenEI Reference Library. Journal Article:...

  15. Modeling-Computer Simulations At Dixie Valley Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wannamaker, Et Al., 2006). Exploration...

  16. Modeling-Computer Simulations At Obsidian Cliff Area (Hulen,...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Obsidian Cliff Area (Hulen, Et Al., 2003). Exploration Activity Details...

  17. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Laney, 2005). Exploration...

  18. Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal Area (Roberts, Et Al., 1995)...

  19. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Pribnow, Et Al., 2003)...

  20. Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Hawthorne Area (Lazaro, Et Al., 2010)...

  1. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Pritchett, 2004)...

  2. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Brown & DuTeaux, 1997)...

  3. Modeling-Computer Simulations At Coso Geothermal Area (1980)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1980)...

  4. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Newman, Et Al., 2006)...

  5. Scientists use world's fastest computer to model materials under...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Materials scientists are for the first time attempting to...

  6. Modeling-Computer Simulations At The Needles Area (Bell & Ramelli...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At The Needles Area (Bell & Ramelli, 2009)...

  7. Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Goff & Decker, 1983)...

  8. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Farrar, Et Al., 2003)...

  9. Modeling-Computer Simulations At Central Nevada Seismic Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Biasi, Et Al., 2009)...

  10. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Roberts, Et Al., ...)...

  11. Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett, 2004)...

  12. LANL researchers use computer modeling to study HIV | National...

    National Nuclear Security Administration (NNSA)

    LANL researchers use computer modeling to study HIV...

  13. Modeling-Computer Simulations At Long Valley Caldera Geothermal...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Tempel, Et Al., 2011)...

  14. Modeling-Computer Simulations At Nw Basin & Range Region (Biasi...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Biasi, Et Al., 2009)...

  15. Modeling-Computer Simulations At Coso Geothermal Area (2000)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (2000)...

  16. Modeling-Computer Simulations At Northern Basin & Range Region...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Biasi, Et Al., 2009)...

  17. Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Wilt & Haar, 1986)...

  18. Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker, Et Al., 2010)...

  19. Modeling-Computer Simulations At Walker-Lane Transitional Zone...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Biasi, Et Al., 2009)...

  20. Modeling-Computer Simulations At Coso Geothermal Area (1999)...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Coso Geothermal Area (1999)...

  1. Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz, Et Al., 2008)...

  2. Modeling-Computer Simulations At Nevada Test And Training Range...

    Open Energy Info (EERE)

    Exploration Activity: Modeling-Computer Simulations At Nevada Test And Training Range Area (Sabin, Et Al., 2004)...

  3. Martin Karplus and Computer Modeling for Chemical Systems

    Office of Scientific and Technical Information (OSTI)

    Additional information about Martin Karplus, computer modeling, and chemical systems is available in electronic documents and on the Web. Documents: Comparison of 3D...

  4. New partnership uses advanced computer science modeling to address...

    National Nuclear Security Administration (NNSA)

    Friday, August 29, 2014 - 10:26am. Several national laboratories and institutions have joined ...

  5. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

    Office of Science (SC) Website

    Research in Computer Architecture, Modeling, and Evolving MPI for Exascale. U.S. DOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)...

  6. Ambient temperature modelling with soft computing techniques

    SciTech Connect (OSTI)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
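
    The hybrid scheme above lends itself to a compact illustration. The sketch below, in Python with NumPy, trains a one-hidden-layer network in which back-propagation-style gradient descent initialises a few individuals of the GA population before the evolutionary search begins; the network shape, hyperparameters, and all names are illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def predict(w, X, n_in, n_h):
            # Unpack a flat weight vector into input->hidden and hidden->output weights.
            W1 = w[:n_in * n_h].reshape(n_in, n_h)
            w2 = w[n_in * n_h:]
            return np.tanh(X @ W1) @ w2

        def mse(w, X, y, n_in, n_h):
            return np.mean((predict(w, X, n_in, n_h) - y) ** 2)

        def bp_refine(w, X, y, n_in, n_h, lr=0.05, steps=500):
            # Gradient descent on the MSE (gradients correct up to a constant factor).
            W1 = w[:n_in * n_h].reshape(n_in, n_h).copy()
            w2 = w[n_in * n_h:].copy()
            for _ in range(steps):
                H = np.tanh(X @ W1)                    # hidden activations
                e = H @ w2 - y                         # residuals
                gH = np.outer(e, w2) * (1.0 - H ** 2)  # error backpropagated to hidden layer
                W1 -= lr * (X.T @ gH) / len(y)
                w2 -= lr * (H.T @ e) / len(y)
            return np.concatenate([W1.ravel(), w2])

        def ga_train(X, y, n_in, n_h, pop=20, gens=50):
            size = n_in * n_h + n_h
            population = [rng.normal(0.0, 0.5, size) for _ in range(pop)]
            for i in range(3):  # the hybrid step: BP initialises a few GA individuals
                population[i] = bp_refine(population[i], X, y, n_in, n_h)
            for _ in range(gens):
                population.sort(key=lambda w: mse(w, X, y, n_in, n_h))
                parents = population[:pop // 2]
                children = []
                while len(parents) + len(children) < pop:
                    a, b = rng.choice(len(parents), size=2, replace=False)
                    mask = rng.random(size) < 0.5                  # uniform crossover
                    child = np.where(mask, parents[a], parents[b])
                    children.append(child + rng.normal(0.0, 0.05, size))  # mutation
                population = parents + children
            return min(population, key=lambda w: mse(w, X, y, n_in, n_h))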

  7. Computational Fluid Dynamics Modeling of Diesel Engine Combustion and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2005 Diesel Engine Emissions Reduction (DEER) Conference Presentations and Posters: 2005_deer_reitz.pdf (682.47 KB). Related: Experiments and Modeling of Two-Stage Combustion in Low-Emissions Diesel Engines; Comparison of Conventional Diesel and Reactivity Controlled Compression...

  8. Modeling-Computer Simulations | Open Energy Information

    Open Energy Info (EERE)

    the risk of inaccurate predictions.[1] Potential Pitfalls: Uncertainties in initial reservoir conditions and other model inputs can cause inaccuracies in simulations, which...

  9. Computational Model of Magnesium Deposition and Dissolution for Property

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Joint Center for Energy Storage Research, June 23, 2016, Research Highlights. [Figure: (top) example distributions of the charge transfer coefficient and standard heterogeneous rate constant, obtained from fitting; (bottom) comparison between experimental and simulated voltammograms, demonstrating good agreement.] Scientific Achievement: A computationally...

  10. Computational Modeling for the American Chemical Society | GE...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Modeling for the American Chemical Society...

  11. Scientists model brain structure to help computers recognize...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The team tried developing a computer model based on human neural structure and function, ... Introspectively, we know that the human brain solves this problem very well. We only have ...

  12. Bayesian approaches for combining computational model output and physical

    Office of Scientific and Technical Information (OSTI)

    Bayesian approaches for combining computational model output and physical observations (Conference). Authors: Higdon, David M.; Lawrence, Earl (Los Alamos National Laboratory); Heitmann, Katrin; Habib, Salman (ANL). Publication Date: 2011-07-25. OSTI Identifier: 1084581. Report Number(s):

  13. Computer modeling reveals how surprisingly potent hepatitis C drug works

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A study reveals how daclatasvir targets one of the hepatitis C virus's proteins and causes the fastest viral decline ever seen with anti-HCV drugs - within 12 hours of treatment. February 19, 2013.

  14. Use Computational Model to Design and Optimize Welding Conditions to

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Today, welding is widely used for repair, maintenance and upgrade of nuclear reactor components. As a critical technology to extend the service life of nuclear power plants beyond 60 years, weld technology must be

  15. Computational social network modeling of terrorist recruitment.

    SciTech Connect (OSTI)

    Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  16. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically and provides integration routines such as Gear's stiff algorithm, as well as numerous practical tools for generating eigenvalues, producing debug output, and plotting graphics. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effects of structural elasticity on the system pressure are significant; however, its capabilities are limited to single phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS for analyzing their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a users manual, but to provide theoretical bases and basic information about the computer model and the reactor.
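
    Since Gear's stiff integrator is central to handling the system equations, a minimal modern analogue may help orient the reader: SciPy's "BDF" method implements the same backward-differentiation-formula family. The Robertson kinetics problem below is a standard stiff test system standing in for the thermal-hydraulic equations, not the HFIRSYS model itself.

        import numpy as np
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            # Classic stiff test problem: three reactions with rate constants
            # spanning nine orders of magnitude.
            y1, y2, y3 = y
            return [-0.04 * y1 + 1.0e4 * y2 * y3,
                    0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                    3.0e7 * y2 ** 2]

        sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                        method="BDF", rtol=1e-6, atol=1e-10)
        print(sol.y[:, -1])  # species fractions at the final time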

  17. Model Guidelines for Including Energy Efficiency and Renewable Energy Into State Energy Emergency Plans

    SciTech Connect (OSTI)

    1999-09-01

    These model guidelines can serve as a planning guide for state and local emergency planners. It is intended to supplement existing energy emergency management plans.

  18. New Computer Model Pinpoints Prime Materials for Carbon Capture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    July 17, 2012. NERSC contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402; UC Berkeley contact: Robert Sanders, rsanders@berkeley.edu. [Figure: one of the 50 best zeolite structures for capturing carbon dioxide. Zeolite is a porous solid made of silicon dioxide, or quartz. In the model, the red balls are oxygen, the tan balls are silicon. The blue-green area is where carbon...]

  19. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect (OSTI)

    Weinan E

    2012-03-29

    The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.

  20. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  1. Draft: Modeling Two-Phase Flow in Porous Media Including Fluid-Fluid Interfacial Area

    SciTech Connect (OSTI)

    Crandall, Dustin; Niessner, Jennifer; Hassanizadeh, S Majid

    2008-01-01

    We present a new numerical model for macro-scale two-phase flow in porous media which is based on a physically consistent theory of multi-phase flow. The standard approach for modeling the flow of two fluid phases in a porous medium consists of a continuity equation for each phase, an extended form of Darcy's law, and constitutive relationships for relative permeability and capillary pressure. This approach is known to have a number of important shortcomings and, in particular, it does not account for the presence and role of fluid-fluid interfaces. An alternative is to use an extended model which is founded on thermodynamic principles and is physically consistent. In addition to the standard equations, the model uses a balance equation for specific interfacial area. The constitutive relationship for capillary pressure involves not only saturation, but also specific interfacial area. We show how parameters can be obtained for the alternative model using experimental data from a new kind of flow cell and present results of a numerical modeling study.
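
    For orientation, the equation set described above can be sketched in standard notation (the symbols below are our assumptions, not copied from the paper): a mass balance and extended Darcy law for each phase, a balance equation for specific interfacial area, and a capillary pressure closure that depends on both saturation and interfacial area.

        % for each phase \alpha = w (wetting), n (non-wetting):
        \frac{\partial(\phi \rho_\alpha S_\alpha)}{\partial t}
          + \nabla \cdot (\rho_\alpha \mathbf{q}_\alpha) = 0,
        \qquad
        \mathbf{q}_\alpha = -\frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K}
          (\nabla p_\alpha - \rho_\alpha \mathbf{g}),
        % balance equation for specific interfacial area a_{wn}:
        \frac{\partial a_{wn}}{\partial t}
          + \nabla \cdot (a_{wn} \mathbf{v}_{wn}) = E_{wn},
        % capillary pressure closure involving saturation and interfacial area:
        p_n - p_w = p_c(S_w, a_{wn}).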

  2. New partnership uses advanced computer science modeling to address climate

    National Nuclear Security Administration (NNSA)

    change | National Nuclear Security Administration | (NNSA) partnership uses advanced computer science modeling to address climate change Friday, August 29, 2014 - 10:26am Several national laboratories and institutions have joined forces to develop and apply the most complete climate and Earth system model to address the most challenging and demanding climate change issues. Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully

  3. Dusty Plasma Modeling of the Fusion Reactor Sheath Including Collisional-Radiative Effects

    SciTech Connect (OSTI)

    Dezairi, Aouatif; Samir, Mhamed; Eddahby, Mohamed; Saifaoui, Dennoun; Katsonis, Konstantinos; Berenguer, Chloe

    2008-09-07

    The structure and the behavior of the sheath in Tokamak collisional plasmas have been studied. The sheath is modeled taking into account the presence of the dust and the effects of charged particle collisions and radiative processes. The latter may allow for optical diagnostics of the plasma.

  4. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    SciTech Connect (OSTI)

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  5. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    SciTech Connect (OSTI)

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-15

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Recently, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  6. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  7. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

    SciTech Connect (OSTI)

    Raustad, Richard A.

    2013-01-01

    This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described.
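
    Component models of this kind are typically built from fitted performance curves. The fragment below evaluates a biquadratic capacity-modifier curve, the general form EnergyPlus uses for DX equipment; the coefficient values and the operating point are hypothetical, chosen only to make the sketch runnable.

        def biquadratic(x, y, c):
            # EnergyPlus-style curve: c0 + c1*x + c2*x^2 + c3*y + c4*y^2 + c5*x*y
            return (c[0] + c[1] * x + c[2] * x * x
                    + c[3] * y + c[4] * y * y + c[5] * x * y)

        # Hypothetical coefficients; production models use manufacturer-fitted values.
        cap_ft = [0.90, 0.010, 0.0, -0.005, 0.0, 0.0]
        rated_capacity_w = 10_000.0
        t_wb_indoor, t_db_outdoor = 19.4, 35.0  # degC operating point
        capacity_w = rated_capacity_w * biquadratic(t_wb_indoor, t_db_outdoor, cap_ft)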

  8. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
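
    A rough sketch of the idea, under assumed functional forms: the classical Butler-Volmer expression with the exchange current density scaled by a sigmoid of pulse time to represent electrode surface availability. The sigmoid shape and every parameter value here are our assumptions for illustration; the abstract does not publish the patented expressions.

        import numpy as np

        F, R = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/(mol K))

        def butler_volmer(i0, eta, T=298.15, alpha_a=0.5, alpha_c=0.5):
            # Classical BV: current density from overpotential eta (V).
            return i0 * (np.exp(alpha_a * F * eta / (R * T))
                         - np.exp(-alpha_c * F * eta / (R * T)))

        def pulse_time_factor(t, t_mid=1.0, k=3.0):
            # Assumed sigmoid weighting of electrode surface availability vs. pulse time (s).
            return 1.0 / (1.0 + np.exp(-k * (np.log10(t) - np.log10(t_mid))))

        def modified_bv(i0, eta, t, **kw):
            # Effective exchange current density scaled by the pulse-time sigmoid.
            return butler_volmer(i0 * pulse_time_factor(t), eta, **kw)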

  9. Interpretation of thermoreflectance measurements with a two-temperature model including non-surface heat deposition

    SciTech Connect (OSTI)

    Regner, K. T.; Wei, L. C.; Malen, J. A.

    2015-12-21

    We develop a solution to the two-temperature diffusion equation in axisymmetric cylindrical coordinates to model heat transport in thermoreflectance experiments. Our solution builds upon prior solutions that account for two-channel diffusion in each layer of an N-layered geometry, but adds the ability to deposit heat at any location within each layer. We use this solution to account for non-surface heating in the transducer layer of thermoreflectance experiments that challenge the timescales of electron-phonon coupling. A sensitivity analysis is performed to identify important parameters in the solution and to establish a guideline for when to use the two-temperature model to interpret thermoreflectance data. We then fit broadband frequency domain thermoreflectance (BB-FDTR) measurements of SiO2 and platinum at a temperature of 300 K with our two-temperature solution to parameterize the gold/chromium transducer layer. We then refit BB-FDTR measurements of silicon and find that accounting for non-equilibrium between electrons and phonons in the gold layer does lessen the previously observed heating frequency dependence reported in Regner et al. [Nat. Commun. 4, 1640 (2013)] but does not completely eliminate it. We perform BB-FDTR experiments on silicon with an aluminum transducer and find limited heating frequency dependence, in agreement with time domain thermoreflectance results. We hypothesize that the discrepancy between thermoreflectance measurements with different transducers results in part from spectrally dependent phonon transmission at the transducer/silicon interface.
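
    The governing equations behind such an analysis are the coupled electron-phonon (two-temperature) diffusion equations; the source term S is what carries the non-surface heat deposition that the solution adds. The notation below is the standard textbook form, not copied from the paper.

        C_e \frac{\partial T_e}{\partial t}
          = \nabla \cdot (k_e \nabla T_e) - G (T_e - T_p) + S(\mathbf{r}, t),
        \qquad
        C_p \frac{\partial T_p}{\partial t}
          = \nabla \cdot (k_p \nabla T_p) + G (T_e - T_p)
        % C_e, C_p: electron/phonon volumetric heat capacities; k_e, k_p: conductivities;
        % G: electron-phonon coupling constant; S: (possibly sub-surface) heat deposition.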

  10. Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2011-09-01

    In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.

  11. Computer Modeling of Saltstone Landfills by Intera Environmental Consultants

    SciTech Connect (OSTI)

    Albenesius, E.L.

    2001-08-09

    This report summarizes the computer modeling studies and how the results of these studies were used to estimate contaminant releases to the groundwater. These modeling studies were used to improve saltstone landfill designs and are the basis for the current reference design. With the reference landfill design, EPA Drinking Water Standards can be met for all chemicals and radionuclides contained in Savannah River Plant waste salts.

  12. Scientists use world's fastest computer to model materials under extreme

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Materials scientists are for the first time attempting to create atomic-scale models that describe how voids are created, grow, and merge. October 30, 2009.

  13. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.

  14. A New Perspective for the Calibration of Computational Predictor Models.

    SciTech Connect (OSTI)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
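
    The simplest instance of the IPM idea, a fixed-width linear band of minimal spread containing all observations, reduces to a small linear program. The Python/SciPy sketch below is a deliberately stripped-down stand-in for the paper's richer formulations.

        import numpy as np
        from scipy.optimize import linprog

        def fit_interval_predictor(X, y):
            # Find weights a, intercept b and half-width h minimizing h such that
            # every observation satisfies  a.x_i + b - h <= y_i <= a.x_i + b + h.
            n, p = X.shape
            c = np.zeros(p + 2)
            c[-1] = 1.0  # objective: minimize the band half-width h
            ones = np.ones((n, 1))
            A_ub = np.vstack([np.hstack([-X, -ones, -ones]),   # y_i <= a.x_i + b + h
                              np.hstack([ X,  ones, -ones])])  # a.x_i + b - h <= y_i
            b_ub = np.concatenate([-y, y])
            bounds = [(None, None)] * (p + 1) + [(0, None)]    # a, b free; h >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[:p], res.x[p], res.x[p + 1]  # a, b, h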

  15. The origins of computer weather prediction and climate modeling

    SciTech Connect (OSTI)

    Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  16. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  17. Multiscale Modeling of Malaria | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Authors: Karniadakis, G.E. Parasitic infectious diseases like malaria and certain hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells. Such changes can disrupt blood flow and, possibly, brain perfusion, as in the case of cerebral malaria. In recent work on stochastic multiscale models, in conjunction with large-scale parallel computing, we were able to quantify, for the first time, the main biophysical

  18. Advanced Reactor Thermal Hydraulic Modeling | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Figure: temperature distribution illustrating thermal striping in a T-junction, computed on Intrepid with Nek5000 and visualized on Eureka with VisIt at the ALCF. Paul Fischer (ANL), Aleks Obabko (ANL), and Hank Childs (LBNL).] PI Name: Paul Fischer. PI Email: fischer@mcs.anl.gov. Institution: Argonne National Laboratory. Allocation Program: INCITE. Allocation Hours at ALCF: 25 Million. Year: 2012. Research Domain: Energy Technologies. The DOE Nuclear

  19. Natural Abundance 17O Nuclear Magnetic Resonance and Computational Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Joint Center for Energy Storage Research, March 14, 2015, Research Highlights. [Figure: (top) example of natural abundance 17O NMR spectra of LiTFSI in a mixture of EC, PC and EMC (4:1:5 by weight); (bottom) the solvation structure of LiTFSI derived from the results obtained by both NMR and quantum chemistry calculations.] Scientific

  20. Martin Karplus and Computer Modeling for Chemical Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Resources with Additional Information: Karplus Equation. Martin Karplus, the Theodore William Richards Professor of Chemistry Emeritus at Harvard, is one of three winners of the 2013 Nobel Prize in chemistry... The 83-year-old Vienna-born theoretical chemist, who is also affiliated with the Université de Strasbourg, Strasbourg, France, is a 1951 graduate of Harvard College and earned his

  1. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  2. ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION G. BERMAN; ET...

    Office of Scientific and Technical Information (OSTI)

    ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION. G. Berman et al. Subject categories: 71 Classical and Quantum Mechanics, General Physics; 99 General and Miscellaneous/Mathematics, Computing, and...

  3. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect (OSTI)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
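
    The generic supervised pattern behind such content analysis, training a classifier on n-gram features of labeled documents and then scoring unseen text, can be sketched with scikit-learn. This is an illustrative stand-in, not the authors' actual feature set or model; the toy corpus below is hypothetical.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy stand-in corpus; real work uses labeled documents from radical
        # terrorist and non-terrorist sources, as described above.
        docs = ["sample contentious rhetoric text", "sample neutral announcement text"]
        labels = [1, 0]  # 1 = violent-intent indicators present, 0 = absent

        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
            LogisticRegression(max_iter=1000),
        )
        model.fit(docs, labels)
        print(model.predict_proba(["unseen text to score"])[:, 1])  # score for class 1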

  4. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources, for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite size sources, in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated. The GUI also allows for plotting of the output.

  5. Final Report for Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect (OSTI)

    Glotzer, Sharon C.

    2013-08-28

    In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton and Oakridge National Laboratory we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.

  6. Computer-Aided Construction of Chemical Kinetic Models

    SciTech Connect (OSTI)

    Green, William H.

    2014-12-31

    The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g., rate coefficients, thermochemistry), one never has sufficient data to uniquely determine them all experimentally. Instead one must work in predictive mode, using theoretical rather than experimental values for many of the numbers in the model, and refining the most sensitive numbers through experiments as appropriate. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical; this is a challenging goal which requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.
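
    Each reaction in such an automatically constructed model typically carries a modified-Arrhenius rate parameterization; evaluating one is trivial but shows concretely what the "numerical parameters" are. The parameter values below are hypothetical.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def modified_arrhenius(T, A, n, Ea):
            # k(T) = A * T^n * exp(-Ea / (R T)), the standard per-reaction form.
            return A * T ** n * np.exp(-Ea / (R * T))

        # Hypothetical parameters for one elementary step, evaluated at 1500 K.
        print(modified_arrhenius(1500.0, A=1.0e8, n=1.5, Ea=120.0e3))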

  7. Computational modeling for hexcan failure under core disruptive accident conditions

    SciTech Connect (OSTI)

    Sawada, T.; Ninokata, H.; Shimizu, A.

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  8. Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code

    SciTech Connect (OSTI)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  9. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    DOE Patents [OSTI]

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
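
    As a flavor of what modeling DNA amplification involves, the sketch below implements the simplest per-cycle PCR kinetics, with efficiency decaying as reagents deplete. It is a logistic surrogate of our own devising, far simpler than the patented method's coupled bioinformatics, kinetics, and thermodynamics modules; all parameter values are illustrative.

        import numpy as np

        def pcr_amplification(n0, cycles=30, eff0=0.95, capacity=1e12):
            # Per-cycle growth: each cycle copies a fraction eff of the strands,
            # with eff falling toward zero as the copy number nears capacity.
            n = float(n0)
            history = [n]
            for _ in range(cycles):
                eff = eff0 * (1.0 - n / capacity)
                n += eff * n
                history.append(n)
            return np.array(history)

        copies = pcr_amplification(1e3)
        print(f"after 30 cycles: {copies[-1]:.3e} copies")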

  10. Compensator models for fluence field modulated computed tomography

    SciTech Connect (OSTI)

    Bartolac, Steven; Jaffray, David; Radiation Medicine Program, Princess Margaret Hospital Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9

    2013-12-15

    Purpose: Fluence field modulated computed tomography (FFMCT) presents a novel approach for acquiring CT images, whereby a patient model guides dynamically changing fluence patterns in an attempt to achieve task-based, user-prescribed, regional variations in image quality, while also controlling dose to the patient. This work aims to compare the relative effectiveness of FFMCT applied to different thoracic imaging tasks (routine diagnostic CT, lung cancer screening, and cardiac CT) when the modulator is subject to limiting constraints, such as might be present in realistic implementations. Methods: An image quality plan was defined for a simulated anthropomorphic chest slice, including regions of high and low image quality, for each of the thoracic imaging tasks. Modulated fluence patterns were generated using a simulated annealing optimization script, which attempts to achieve the image quality plan under a global dosimetric constraint. Optimization was repeated under different types of modulation constraints (e.g., fixed or gantry angle dependent patterns, continuous or comprised of discrete apertures), with the most limiting case being a fixed conventional bowtie filter. For each thoracic imaging task, an image quality map (IQM_sd) representing the regionally varying standard deviation is predicted for each modulation method and compared to the prescribed image quality plan as well as against results from uniform fluence fields. Relative integral dose measures were also compared. Results: Each IQM_sd resulting from FFMCT showed improved agreement with planned objectives compared to those from uniform fluence fields for all cases. Dynamically changing modulation patterns yielded better uniformity, improved image quality, and lower dose compared to fixed filter patterns with optimized tube current. For the latter fixed filter cases, the optimal choice of tube current modulation was found to depend heavily on the task. Average integral dose reduction compared
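
    The simulated annealing optimization at the heart of the method follows the standard accept/reject loop. The sketch below shows that loop applied to a hypothetical quadratic stand-in for the image-quality/dose objective; the real cost function, move set, and cooling schedule are specific to the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def anneal(cost, x0, step=0.1, t0=1.0, t_min=1e-3, cooling=0.95, iters=50):
            # Generic simulated annealing over a nonnegative fluence vector x.
            x, c = x0.copy(), cost(x0)
            t = t0
            while t > t_min:
                for _ in range(iters):
                    xn = np.clip(x + rng.normal(0.0, step, x.shape), 0.0, None)
                    cn = cost(xn)
                    # Always accept improvements; accept worse moves with
                    # Boltzmann probability exp((c - cn) / t).
                    if cn < c or rng.random() < np.exp((c - cn) / t):
                        x, c = xn, cn
                t *= cooling  # geometric cooling schedule
            return x, c

        # Hypothetical quadratic stand-in for the image-quality/dose objective.
        target = np.full(16, 0.5)
        best, best_cost = anneal(lambda f: np.sum((f - target) ** 2), np.ones(16))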

  11. Theoretical modeling of UV-Vis absorption and emission spectra in liquid state systems including vibrational and conformational effects: Explicit treatment of the vibronic transitions

    SciTech Connect (OSTI)

    D’Abramo, Marco (Dipartimento di Chimica, Università Sapienza, P.le Aldo Moro 5, 00185, Rome); Aschi, Massimiliano; Amadei, Andrea

    2014-04-28

    Here, we extend a recently introduced theoretical-computational procedure [M. D’Alessandro, M. Aschi, C. Mazzuca, A. Palleschi, and A. Amadei, J. Chem. Phys. 139, 114102 (2013)] to include quantum vibrational transitions in modelling electronic spectra of atomic molecular systems in condensed phase. The method is based on the combination of Molecular Dynamics simulations and quantum chemical calculations within the Perturbed Matrix Method approach. The main aim of the presented approach is to reproduce as much as possible the spectral line shape which results from a subtle combination of environmental and intrinsic (chromophore) mechanical-dynamical features. As a case study, we were able to model the low energy UV-vis transitions of pyrene in liquid acetonitrile in good agreement with the experimental data.

  12. PACKAGE INCLUDES:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PACKAGE INCLUDES: Airfare from Seattle, 4 & 5 Star Hotels, Transfers, Select Meals, Guided Tours and Excursions DAY 01: BANGKOK - ARRIVAL DAY 02: BANGKOK - SIGHTSEEING DAY 03: BANGKOK - FLOATING MARKET DAY 04: BANGKOK - AT LEISURE DAY 05: BANGKOK - CHIANG MAI BY AIR DAY 06: CHIANG MAI - SIGHTSEEING DAY 07: CHIANG MAI - ELEPHANT CAMP DAY 08: CHIANG MAI - PHUKET BY AIR DAY 09: PHUKET - PHI PHI ISLAND BY FERRY DAY 10: PHUKET - AT LEISURE DAY 11: PHUKET - CORAL ISLAND BY SPEEDBOAT DAY 12: PHUKET

  13. Modeling the Fracture of Ice Sheets on Parallel Computers

    SciTech Connect (OSTI)

    Waisman, Haim; Tuminaro, Ray

    2013-10-10

    The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, developing novel numerical tools to enable the modeling of the physics, and collaborating with ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurate modeling of this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer significant advancement to the field and close a large gap in knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.

  14. Computational fluid dynamic modeling of fluidized-bed polymerization reactors

    SciTech Connect (OSTI)

    Rokkam, Ram

    2012-01-01

    Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas and liquid phase reactors. The gas phase processes are more advantageous, and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas phase polymerization fluidized bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on kinetic theory of granular flow closures. The electrostatic model is developed based on a fixed, size-dependent charge for each type of particle (catalyst, polymer, polymer fines) phase. The combined CFD model is first verified using simple test cases, validated with experiments, and applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For scale-up of the fluidized-bed reactor, filtered models are developed and implemented on the pilot scale reactor.

  15. Computational fluid dynamics modeling of proton exchange membrane fuel cells

    SciTech Connect (OSTI)

    UM,SUKKEE; WANG,C.Y.; CHEN,KEN S.

    2000-02-11

    A transient, multi-dimensional model has been developed to simulate proton exchange membrane (PEM) fuel cells. The model accounts simultaneously for electrochemical kinetics, current distribution, hydrodynamics and multi-component transport. A single set of conservation equations valid for flow channels, gas-diffusion electrodes, catalyst layers and the membrane region are developed and numerically solved using a finite-volume-based computational fluid dynamics (CFD) technique. The numerical model is validated against published experimental data with good agreement. Subsequently, the model is applied to explore hydrogen dilution effects in the anode feed. The predicted polarization curves under hydrogen dilution conditions are found to be in qualitative agreement with recent experiments reported in the literature. The detailed two-dimensional electrochemical and flow/transport simulations further reveal that in the presence of hydrogen dilution in the fuel stream, hydrogen is depleted at the reaction surface, resulting in substantial kinetic polarization and hence a lower current density that is limited by hydrogen transport from the fuel stream to the reaction site.
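
    A zero-dimensional polarization-curve model illustrates the loss terms the CFD simulation resolves in detail: activation (kinetic), ohmic, and concentration losses, with hydrogen dilution effectively lowering the limiting current density iL. All parameter values below are illustrative, not fitted to the paper's data.

        import numpy as np

        def cell_voltage(i, E0=1.1, i0=1e-4, iL=1.5, R_ohm=0.2, T=353.15, alpha=0.5):
            # Cell voltage vs. current density i (A/cm^2) from three loss terms.
            F, R = 96485.0, 8.314
            i = np.asarray(i, dtype=float)
            eta_act = (R * T / (alpha * F)) * np.log(i / i0)        # activation loss
            eta_ohm = i * R_ohm                                     # ohmic loss
            eta_conc = -(R * T / (2 * F)) * np.log(1.0 - i / iL)    # mass-transport loss
            return E0 - eta_act - eta_ohm - eta_conc

        i = np.linspace(0.01, 1.4, 50)  # current density sweep below iL
        V = cell_voltage(i)             # lowering iL (dilution) depresses the curve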

  16. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence: role in the context of fusion research. * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent

  17. Computational model for simulation small testing launcher, technical solution

    SciTech Connect (OSTI)

    Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian

    2014-12-10

    The purpose of this paper is to present some aspects of the computational model and technical solutions for a multistage suborbital launcher for testing (SLT), used to test spatial equipment and scientific measurements. The computational model consists of numerical simulation of the SLT's evolution for different start conditions. The launcher model has six degrees of freedom (6DOF) and variable mass. The analysed results are the flight parameters and ballistic performance. The discussion focuses on the technical feasibility of building a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Therefore, while classical suborbital sounding rockets are unguided and use solid-fuel motors in an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as shown in the title, has two major objectives: first, a short-term objective, which consists of obtaining a suborbital launching system that will be able to go into service in a predictable period of time, and a long-term objective that consists of the development and testing of some unconventional sub-systems which will be integrated later in the satellite launcher as part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital vehicle
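
    For flavor, the sketch below integrates a much-simplified planar point-mass ascent with variable mass; a 6DOF model like the one described adds attitude dynamics, guidance, and staging. All vehicle parameters and helper names are hypothetical, not taken from the SLT project.

        import numpy as np

        g = 9.81                     # m/s^2
        def rho_air(h):              # crude exponential atmosphere
            return 1.225 * np.exp(-h / 8500.0)

        # Hypothetical single-stage data
        m0, m_dry = 500.0, 200.0     # launch and burnout mass, kg
        mdot, Isp = 5.0, 230.0       # propellant flow (kg/s), specific impulse (s)
        Cd, S = 0.4, 0.12            # drag coefficient and reference area (m^2)

        def derivs(t, y):
            """y = [x, h, vx, vh, m]; thrust along velocity after liftoff."""
            x, h, vx, vh, m = y
            v = np.hypot(vx, vh)
            burning = m > m_dry
            T = mdot * Isp * g if burning else 0.0
            ux, uh = (vx / v, vh / v) if v > 1.0 else (0.0, 1.0)
            D = 0.5 * rho_air(h) * Cd * S * v * v
            ax = (T - D) * ux / m
            ah = (T - D) * uh / m - g
            return np.array([vx, vh, ax, ah, -mdot if burning else 0.0])

        # Fixed-step RK4 integration of the ascent up to apogee
        y, t, dt = np.array([0.0, 0.0, 0.0, 0.0, m0]), 0.0, 0.05
        while y[3] >= 0.0 or y[4] > m_dry:
            k1 = derivs(t, y); k2 = derivs(t + dt/2, y + dt/2*k1)
            k3 = derivs(t + dt/2, y + dt/2*k2); k4 = derivs(t + dt, y + dt*k3)
            y, t = y + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt

        print(f"apogee ~ {y[1]/1000:.1f} km at t = {t:.1f} s")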

  18. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  19. Computer modeling of active experiments in space plasmas

    SciTech Connect (OSTI)

    Bollens, R.J.

    1993-01-01

    The understanding of space plasmas is expanding rapidly. This is in large part due to the ambitious efforts of scientists from around the world who are performing large-scale active experiments in the space plasma surrounding the earth. One such effort was designated the Active Magnetospheric Particle Tracer Explorers (AMPTE) and consisted of a series of plasma releases completed during 1984 and 1985. What makes the AMPTE experiments particularly interesting is the occurrence of a dramatic anomaly that was completely unpredicted. During the AMPTE experiment, three satellites traced the solar-wind flow into the earth's magnetosphere. One satellite, built by West Germany, released a series of barium and lithium canisters that were detonated and subsequently photo-ionized by solar radiation, thereby creating an artificial comet. Another satellite, built by Great Britain and in the vicinity during detonation, carried, as did the first satellite, a comprehensive set of magnetic field, particle, and wave instruments. Upon detonation, what was observed by the satellites, as well as by aircraft and ground-based observers, was quite unexpected. The initial deflection of the ion clouds was not in the direction of the ambient solar-wind flow (V) but rather in the direction transverse to the solar wind and the background magnetic field (V × B). This result was not predicted by any existing theories or simulation models; it is the main subject discussed in this dissertation. A large three-dimensional computer simulation was produced to demonstrate that this transverse motion can be explained in terms of a rocket effect. Because of the extreme computer resources utilized in producing this work, the computer methods used to complete the calculation and the visualization techniques used to view the results are also discussed.

  20. Why applicants should use computer simulation models to comply with the FERC's new merger policy

    SciTech Connect (OSTI)

    Frankena, M.W.; Morris, J.R.

    1997-02-01

    Computer models for electric utility use in complying with the US Federal Energy Regulatory Commission policy on mergers are described. Four types of simulation models that are widely used in the electric power industry are considered as tools for analyzing market power issues: dispatch/transportation models, dispatch/unit-commitment models, load-flow models, and load-flow/dispatch models. Basic model capabilities and limitations are described. Uses of the models for other purposes are also noted, including regulatory filings, antitrust litigation, and evaluation of pricing strategies.

    1. Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers

      SciTech Connect (OSTI)

      J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan

      2011-06-21

      Forced outages and boiler unavailability at coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems that produce stable nanocrystalline coatings forming a protective, continuous scale of either Al{sub 2}O{sub 3} or Cr{sub 2}O{sub 3}. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion-barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al{sub 2}O{sub 3} scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed; among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best-quality coating with a minimum number of shallow defects, and the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al{sub 2}O{sub 3} scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al

    2. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

      SciTech Connect (OSTI)

      Winters, W.S.; Evans, G.H.; Moen, C.D.

      1996-10-01

      This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows, including the effects of surface chemistry. CURRENT is a finite-volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry are modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post processor ANTIPASTO and the post processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

    3. Accelerated Climate Modeling for Energy | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility a Category 5 hurricane simulated by the CESM at 13 km resolution An example of a Category 5 hurricane simulated by the CESM at 13 km resolution. Precipitable water (gray scale) shows the detailed dynamical structure in the flow. Strong precipitation is overlaid in red. High resolution is necessary to simulate reasonable numbers of tropical cyclones including Category 4 and 5 storms. Credit: Alan Scott and Mark Taylor, Sandia National Laboratories Accelerated Climate Modeling for

    4. Accelerated Climate Modeling for Energy | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility An example of a Category 5 hurricane simulated by the CESM at 13 km resolution An example of a Category 5 hurricane simulated by the CESM at 13 km resolution. Precipitable water (gray scale) shows the detailed dynamical structure in the flow. Strong precipitation is overlaid in red. High resolution is necessary to simulate reasonable numbers of tropical cyclones including Category 4 and 5 storms. Alan Scott and Mark Taylor, Sandia National Laboratories Accelerated Climate Modeling

    5. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; et al

      2015-12-21

      Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate ‘‘sub-ecosystem-scale’’ parameterizations.

    6. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

      SciTech Connect (OSTI)

      Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

      2015-12-21

      Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate ‘‘sub-ecosystem-scale’’ parameterizations.

    7. CAPE-OPEN compliant stochastic modeling and reduced-order model computation capability for APECS system

      SciTech Connect (OSTI)

      Diwekar, Urmila; Shastri, Yogendra (Vishwamitra Research Institute Clarendon Hills, IL); Subrmanyan, Karthik; Zitney, S.E.

      2007-11-04

      APECS (Advanced Process Engineering Co-Simulator) is an integrated software suite that combines the power of process simulation with high-fidelity computational fluid dynamics (CFD) for improved design, analysis, and optimization of process engineering systems. The APECS system uses commercial process simulation (e.g., Aspen Plus) and CFD (e.g., FLUENT) software integrated with the process-industry standard CAPE-OPEN (CO) interfaces. This breakthrough capability allows engineers to better understand and optimize the fluid mechanics that drive overall power plant performance and efficiency. The focus of this paper is the CAPE-OPEN compliant stochastic modeling and reduced-order model computation capability built around the APECS system. The usefulness of these capabilities is illustrated with a coal-fired, gasification-based FutureGen power plant simulation. These capabilities are used to generate efficient reduced-order models and to optimize model complexity.

    8. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

      SciTech Connect (OSTI)

      Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

      2007-06-01

      Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache-reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall, our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
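
      To make the central idea concrete, the sketch below shows the loop structure of cache blocking for a 2D five-point stencil. The tile size and function name are hypothetical, and NumPy is used only to keep the example short; the paper's kernels are written in compiled languages and tuned to specific cache hierarchies.

          import numpy as np

          def stencil_blocked(a, tb=64):
              """One Jacobi sweep of a five-point stencil, in tb x tb tiles."""
              n, m = a.shape
              out = a.copy()
              for ii in range(1, n - 1, tb):          # tile over rows
                  for jj in range(1, m - 1, tb):      # tile over columns
                      i_end = min(ii + tb, n - 1)
                      j_end = min(jj + tb, m - 1)
                      # All reads for this tile stay within a small working
                      # set, which is what improves cache reuse on hardware.
                      out[ii:i_end, jj:j_end] = 0.25 * (
                          a[ii - 1:i_end - 1, jj:j_end] +
                          a[ii + 1:i_end + 1, jj:j_end] +
                          a[ii:i_end, jj - 1:j_end - 1] +
                          a[ii:i_end, jj + 1:j_end + 1])
              return out

          a = np.random.rand(512, 512)
          b = stencil_blocked(a)
          print(b.shape)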

    9. Modeling and Analysis of a Lunar Space Reactor with the Computer...

      Office of Scientific and Technical Information (OSTI)

      Reactor with the Computer Code RELAP5-3DATHENA Citation Details In-Document Search Title: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D...

    10. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

      SciTech Connect (OSTI)

      Liao, C; Quinlan, D; Panas, T

      2009-10-06

      General purpose languages, such as C++, permit the construction of various high-level abstractions to hide redundant, low-level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates, and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large-scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are unlikely to abandon the help of high-level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high-productivity and high-performance computing. We believe that standard or domain-specific semantics associated with high-level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. Meanwhile, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high-level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

    11. Computational Modeling and Assessment Of Nanocoatings for Ultra Supercritical Boilers

      SciTech Connect (OSTI)

      David W. Gandy; John P. Shingledecker

      2011-04-11

      Forced outages and boiler unavailability in conventional coal-fired fossil power plants are most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall-thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burner systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low-alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce or eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project, jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096), have focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron- and nickel-base) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% Ni is required in Fe-based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long-term thermal exposure test results further showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. In order to enhance the durability of these coatings, it is necessary to develop a diffusion-barrier interlayer coating such as TiN and/or AlN. The third task, 'Process Advanced MCrAl Nanocoating Systems', of the six-task project jointly sponsored by the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DE-FC26-07NT43096) has focused on processing of

    12. Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model

      SciTech Connect (OSTI)

      Sharma, Chandan; Raustad, Richard

      2013-06-01

      Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy-efficient air-conditioning systems which offer electricity savings as well as a reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses, which provide a significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data for these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand from the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, a VRF system model based on lab-measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points. Infiltration model inputs were adjusted at the outset to tune the computer model, and subsequent field measurements were then compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.
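
      The kind of measured-versus-simulated comparison described above is often summarized with calibration statistics such as the normalized mean bias error and the coefficient of variation of the RMSE. The sketch below computes both on synthetic stand-in data; it illustrates the metrics, not the paper's actual analysis.

          import numpy as np

          # Synthetic stand-ins for field measurements and EnergyPlus output
          rng = np.random.default_rng(0)
          measured = 5.0 + 2.0 * np.sin(np.linspace(0, 12 * np.pi, 24 * 30))
          simulated = measured * 1.03 + rng.normal(0.0, 0.3, measured.size)

          def nmbe(m, s):
              """Normalized mean bias error, %: net over/under-prediction."""
              return 100.0 * np.sum(m - s) / (m.size * np.mean(m))

          def cv_rmse(m, s):
              """CV of the RMSE, %: scatter of the hourly errors."""
              return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / np.mean(m)

          print(f"NMBE     = {nmbe(measured, simulated):+.2f} %")
          print(f"CV(RMSE) = {cv_rmse(measured, simulated):.2f} %")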

    13. Review of the synergies between computational modeling and experimental characterization of materials across length scales

      SciTech Connect (OSTI)

      Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean -Hubert

      2015-11-16

      With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization, and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain greater insight into structure–property relationships and to study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion of the perspectives from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. The manuscript ends with a discussion of some open problems and scientific questions that are being explored in order to advance this relatively new field of research.

    14. Review of the synergies between computational modeling and experimental characterization of materials across length scales

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean -Hubert

      2015-11-16

      With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization, and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain greater insight into structure–property relationships and to study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion of the perspectives from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. The manuscript ends with a discussion of some open problems and scientific questions that are being explored in order to advance this relatively new field of research.

    15. Efficient Computation of Info-Gap Robustness for Finite Element Models

      SciTech Connect (OSTI)

      Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

      2012-07-05

      A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because evaluating a robustness function requires the solution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatment of the info-gap problems using the adjoint methodology is outlined in detail, and the latter problem is solved for four separate finite element models. Compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to nonlinear systems.
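
      As a toy illustration of why robustness functions are expensive, the sketch below evaluates info-gap robustness for a small Ax = b model by nesting a worst-case search inside a bisection on the uncertainty horizon; the report's adjoint method replaces the costly inner search with a cheap sensitivity computation. The uncertainty model and all values here are assumptions for illustration.

          import numpy as np

          A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
          b = np.array([1.0, 2.0])
          x_nominal = np.linalg.solve(A0, b)

          def worst_case_error(alpha, n_samples=2000, seed=0):
              """Largest ||x - x_nominal|| over sampled |dA| <= alpha."""
              rng = np.random.default_rng(seed)
              worst = 0.0
              for _ in range(n_samples):
                  dA = alpha * rng.uniform(-1.0, 1.0, A0.shape)
                  x = np.linalg.solve(A0 + dA, b)
                  worst = max(worst, np.linalg.norm(x - x_nominal))
              return worst

          def robustness(tolerance, alpha_hi=2.0, iters=30):
              """Largest horizon alpha whose worst case meets the tolerance."""
              lo, hi = 0.0, alpha_hi
              for _ in range(iters):
                  mid = 0.5 * (lo + hi)
                  lo, hi = (mid, hi) if worst_case_error(mid) <= tolerance else (lo, mid)
              return lo

          tol = 0.1 * np.linalg.norm(x_nominal)
          print(f"robustness to a 10% error tolerance: alpha = {robustness(tol):.4f}")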

    16. Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing ... Heavy Duty Fuels DISI Combustion HCCI/SCCI Fundamentals Spray Combustion Modeling ...

    17. Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Energy Defense Waste Management Programs Advanced Nuclear Energy

    18. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: * About Bassi * Memory on Bassi * Large Page Memory (It's Great!) * System Configuration * Large Page

    19. HYDRODYNAMIC MODELS FOR SLURRY BUBBLE COLUMN REACTORS. FINAL TECHNICAL REPORT ALSO INCLUDES THE QUARTERLY TECHNICAL REPORT FOR THE PERIOD 01/01/1997 - 03/31/1997.

      SciTech Connect (OSTI)

      DIMITRI GIDASPOW

      1997-08-15

      The objective of this study is to develop a predictive, experimentally verified, three-phase computational fluid dynamics (CFD) model. It predicts the gas, liquid, and solid hold-ups (volume fractions) and flow patterns in the industrially important bubble-coalesced (churn-turbulent) regime. The input to the model can be either particulate viscosities, as measured with a Brookfield viscometer, or an effective restitution coefficient for the particles. A combination of x-ray and {gamma}-ray densitometers was used to measure solid and liquid volume fractions. There is fair agreement between theory and experiment. A CCD camera was used to measure instantaneous particle velocities, and there is good agreement between the computed time-average velocities and the measurements. There is excellent agreement between the viscosity of 800 {micro}m glass beads obtained from measurement of granular temperature (the random kinetic energy of particles) and the measurement using a Brookfield viscometer. A relation between particle Reynolds stresses and granular temperature was found for developed flow. Such measurements and computations gave a restitution coefficient of about 0.9 for a methanol catalyst. A transient, two-dimensional hydrodynamic model for the production of methanol from syn-gas in an Air Products/DOE LaPorte slurry bubble column reactor was developed. The model predicts downflow of catalyst at the walls and oscillatory particle and gas flow at the center, with a frequency of about 0.7 Hertz. The computed temperature variation in the reactor with heat exchangers was only about 5 K, indicating good thermal management. The computed slurry height, gas holdup, and rate of methanol production agree with LaPorte's reported data. Unlike previous models in the literature, this model computes the gas and particle holdups and the particle rheology. The only adjustable parameter in the model is the effective particle restitution coefficient.
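
      The granular-temperature/viscosity link mentioned above can be illustrated with the dilute kinetic-theory expression mu = (5*sqrt(pi*Theta)/96)*rho_s*d_p (in the spirit of Gidaspow's kinetic theory of granular flow). The sketch below evaluates it for a few granular temperatures; the property values are illustrative, not the report's data.

          import numpy as np

          rho_s = 2500.0      # glass bead density, kg/m^3 (illustrative)
          d_p = 800e-6        # particle diameter, m (800 micrometers)

          def dilute_viscosity(theta):
              """Dilute-limit granular viscosity (Pa*s) from Theta (m^2/s^2)."""
              return 5.0 * np.sqrt(np.pi * theta) / 96.0 * rho_s * d_p

          for theta in (1e-4, 1e-3, 1e-2):
              mu = dilute_viscosity(theta)
              print(f"Theta = {theta:.0e} m^2/s^2  ->  mu = {mu*1e3:.2f} mPa*s")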

    20. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    1. CASTING DEFECT MODELING IN AN INTEGRATED COMPUTATIONAL MATERIALS ENGINEERING APPROACH

      SciTech Connect (OSTI)

      Sabau, Adrian S

      2015-01-01

      To accelerate the introduction of new cast alloys, the simultaneous modeling and simulation of multiphysical phenomena needs to be considered in the design and optimization of the mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. Data on calculated solidification shrinkage are presented and their effects on microporosity levels discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted for the sake of completeness and to illustrate how the length scales of the microstructure features, as well as porosity defects, are taken into account in modeling the mechanical properties. Thus, data on casting defects, including microstructure features, are crucial for evaluating the final performance-related properties of a component. ACKNOWLEDGEMENTS: This work was performed under a Cooperative Research and Development Agreement (CRADA) with Nemak Inc. and Chrysler Co. for the project "High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines." The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through the Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.

    2. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

      SciTech Connect (OSTI)

      Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

      2015-09-01

      The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective of helping to sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS "pathways," or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included human reliability analysis (HRA), but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion in the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with (1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and (2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

    3. Designing computing system architecture and models for the HL-LHC era

      SciTech Connect (OSTI)

      Bauerdick, L.; Bockelman, B.; Elmer, P.; Gowdy, S.; Tadel, M.; Wurthwein, F.

      2015-01-01

      This work describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade.

    4. Modeling-Computer Simulations At U.S. West Region (Sabin, Et...

      Open Energy Info (EERE)

      Exploration Activity: Modeling-Computer Simulations At U.S. West Region (Sabin, Et Al., 2004) Exploration Activity Details...

    5. Modeling-Computer Simulations At Cove Fort Area (Toksoz, Et Al...

      Open Energy Info (EERE)

      Exploration Activity: Modeling-Computer Simulations At Cove Fort Area (Toksoz, Et Al, 2010) Exploration Activity Details...

    6. Modeling-Computer Simulations At U.S. West Region (Williams ...

      Open Energy Info (EERE)

      Exploration Activity: Modeling-Computer Simulations At U.S. West Region (Williams & Deangelo, 2008) Exploration Activity...

    7. final report for Center for Programming Models for Scalable Parallel Computing

      SciTech Connect (OSTI)

      Johnson, Ralph E

      2013-04-10

      This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.

    8. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

      SciTech Connect (OSTI)

      Carey, D.C.

      1999-12-09

      TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged-particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT; for the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped-element approximation, described below, precludes the inclusion of the effects of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
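
      The transfer-matrix picture behind TRANSPORT/TURTLE-style codes can be sketched in a few lines: compose matrices for lumped elements and push an ensemble of rays through them. The beamline below (a drift, a thin-lens quadrupole, and another drift) and its parameters are hypothetical; real codes track full phase space with apertures, fitting, and higher-order terms.

          import numpy as np

          def drift(L):
              """Field-free drift of length L (m), one transverse plane."""
              return np.array([[1.0, L], [0.0, 1.0]])

          def thin_quad(f):
              """Thin-lens quadrupole with focal length f (m); f < 0 defocuses."""
              return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

          # Beamline: drift, focusing quad, drift (matrices compose right-to-left)
          M = drift(2.0) @ thin_quad(1.5) @ drift(1.0)

          # TURTLE-style tracking: push rays drawn from the accepted initial
          # phase space through the lumped-element line.
          rng = np.random.default_rng(1)
          rays = rng.normal(0.0, [1e-3, 1e-4], size=(10000, 2)).T  # x (m), x' (rad)
          rays_out = M @ rays

          print(f"input  rms x = {rays[0].std()*1e3:.3f} mm")
          print(f"output rms x = {rays_out[0].std()*1e3:.3f} mm")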

    9. High-Performance Computer Modeling of the Cosmos-Iridium Collision

      SciTech Connect (OSTI)

      Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

      2009-08-28

      This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.
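
      As a minimal illustration of the orbital-propagation ingredient, the sketch below integrates two-body point-mass motion with a fixed-step RK4 scheme. Operational SSA propagators add drag, geopotential harmonics, and more; the initial state is a generic low-Earth orbit, not actual Cosmos or Iridium elements.

          import numpy as np

          MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

          def accel(r):
              return -MU * r / np.linalg.norm(r) ** 3

          def rk4_step(r, v, dt):
              """Advance position/velocity one step of size dt (s)."""
              k1r, k1v = v, accel(r)
              k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
              k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
              k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
              r_new = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
              v_new = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
              return r_new, v_new

          # Circular orbit near 790 km altitude (roughly the collision regime)
          r = np.array([6378e3 + 790e3, 0.0, 0.0])
          v = np.array([0.0, np.sqrt(MU / np.linalg.norm(r)), 0.0])

          t, dt = 0.0, 10.0
          for _ in range(600):       # propagate 100 minutes
              r, v = rk4_step(r, v, dt)
              t += dt
          print(f"altitude after {t/60:.0f} min: {(np.linalg.norm(r)-6378e3)/1e3:.1f} km")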

    10. Review of the synergies between computational modeling and experimenta...

      Office of Scientific and Technical Information (OSTI)

      length scales, new research directions are emerging in materials science and computational mechanics. ... Report Number(s): SAND--2015-10307J Journal ID: ISSN 0022-2461; PII: ...

    11. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

      SciTech Connect (OSTI)

      Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

      1998-10-01

      Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution, mixing, and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW results in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and of the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving the transport and distribution of combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows and diffusion-dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code is written in standard Fortran 90. This manual comprises three volumes: Volume I describes the governing physical equations and computational model, and Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included.

    12. Computer modeling of properties of complex molecular systems

      SciTech Connect (OSTI)

      Kulkova, E.Yu.; Khrenova, M.G.; Polyakov, I.V.

      2015-03-10

      Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics (QM/MM) approaches, which treat part of the system with quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.
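
      The QM/MM energy partition underlying such calculations can be caricatured as E_total = E_QM(region) + E_MM(environment) + E_coupling. The sketch below combines stand-in functions to show how the pieces fit together; a real study would call a quantum-chemistry package and a force field, and all names and values here are placeholders.

          def e_qm(region_atoms):
              """Stand-in for a quantum-chemical energy of the active region."""
              return -76.4 * len(region_atoms) / 3.0     # hypothetical value

          def e_mm(env_atoms):
              """Stand-in for a force-field energy of the environment."""
              return -0.02 * len(env_atoms)              # hypothetical value

          def e_coupling(region_atoms, env_atoms):
              """Stand-in for QM-MM electrostatic/van der Waals coupling."""
              return -0.001 * len(region_atoms) * len(env_atoms)

          region = ["O", "H", "H"]                 # chromophore / active site
          environment = ["C"] * 50 + ["H"] * 100   # protein or dye scaffold

          e_total = e_qm(region) + e_mm(environment) + e_coupling(region, environment)
          print(f"E(QM/MM) = {e_total:.3f} hartree")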

    13. Improvements in fast-response flood modeling: desktop parallel computing and domain tracking

      SciTech Connect (OSTI)

      Judi, David R; Mcpherson, Timothy N; Burian, Steven J

      2009-01-01

      It is becoming increasingly important to be able to accurately forecast flooding, as flooding accounts for the largest share of losses due to natural disasters in the world and in the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated by these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g., dikes/levees, roads, walls). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments, and because two-dimensional models based on the shallow water equations have significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper has shown that two-dimensional flood modeling computational time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. In addition, this paper has shown that when desktop parallel computing is coupled with a domain-tracking algorithm, significant computation time can be eliminated by completing computations only on inundated cells, as sketched below. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, an engineering design tool, or a planning tool. Perhaps of even greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al
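
      A minimal sketch of the domain-tracking idea: at each step, mark the wet cells plus a one-cell halo and restrict work to that active set. The spreading rule below is a crude mass-conserving placeholder for the paper's shallow-water solver, and all names and values are hypothetical.

          import numpy as np

          def active_cells(depth, threshold=1e-6):
              """Boolean mask of wet cells dilated by one cell each way."""
              wet = depth > threshold
              active = wet.copy()
              active[1:, :] |= wet[:-1, :]; active[:-1, :] |= wet[1:, :]
              active[:, 1:] |= wet[:, :-1]; active[:, :-1] |= wet[:, 1:]
              return active

          def step(depth, frac=0.1):
              """Move a fraction of each cell's water to its four neighbors."""
              act = active_cells(depth)
              out = depth.copy()
              flux = frac * depth / 4.0
              for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                  out += np.roll(flux, shift, axis=axis)
              out -= frac * depth
              # Only 'act' cells can have changed; a tracked solver would loop
              # over them directly and skip the dry remainder of the grid.
              return np.where(act, out, depth), int(act.sum())

          depth = np.zeros((200, 200))
          depth[100, 100] = 10.0                 # point source, m
          for _ in range(50):
              depth, n_active = step(depth)
          print(f"active cells: {n_active} of {depth.size}")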

    14. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

      2016-07-20

      A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally, such dependencies have been ignored when climate models are evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid-point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not significantly alter the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
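
      A small sketch of the statistic's main ingredients: build a first-order-neighborhood GMRF precision matrix on a grid and score a model-minus-observation anomaly field with the quadratic form d'Qd. The precision parameterization and values below are assumptions for illustration, not the paper's estimated model.

          import numpy as np

          def gmrf_precision(nx, ny, kappa=1.0, tau=1.0):
              """Precision Q = tau*(kappa*I + D), D the grid graph Laplacian."""
              n = nx * ny
              Q = np.zeros((n, n))
              for i in range(nx):
                  for j in range(ny):
                      k = i * ny + j
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          ii, jj = i + di, j + dj
                          if 0 <= ii < nx and 0 <= jj < ny:
                              Q[k, k] += 1.0          # degree term
                              Q[k, ii * ny + jj] -= 1.0
                      Q[k, k] += kappa                # keeps Q positive definite
              return tau * Q

          nx, ny = 8, 8
          Q = gmrf_precision(nx, ny)

          rng = np.random.default_rng(2)
          d = rng.normal(0.0, 0.5, nx * ny)   # stand-in model-minus-obs anomalies

          # Quadratic-form statistic; spatially coherent errors score differently
          # than independent ones because Q encodes neighbor dependence.
          score = float(d @ Q @ d)
          print(f"GMRF test statistic: {score:.2f}")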

    15. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

      SciTech Connect (OSTI)

      JACKSON VL

      2011-08-31

      The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of "uniform" is evolving in this context. Computational fluid dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, studying operational parameters, and predicting mixing performance at full scale.

    16. Validation of the thermospheric vector spherical harmonic (VSH) computer model. Master's thesis

      SciTech Connect (OSTI)

      Davis, J.L.

      1991-01-01

      A semi-empirical computer model of the lower thermosphere has been developed that provides a description of the composition and dynamics of the thermosphere (Killeen et al., 1992). Input variables needed to run the VSH model include time, location, and geophysical conditions. One of the output variables the model provides, neutral density, is of particular interest to the U.S. Air Force. Neutral densities vary both as a result of changes in solar flux (e.g., over the solar cycle) and as a result of changes in the magnetosphere (e.g., large changes in neutral density occur during geomagnetic storms). Satellites in Earth orbit experience aerodynamic drag due to the atmospheric density of the thermosphere. The variability in neutral density described above affects the drag a satellite experiences and, as a result, can change the orbital characteristics of the satellite. These changes make it difficult to track the satellite's position. Therefore, it is particularly important to ensure that the accuracy of the model's neutral density output is optimized for all input parameters. To accomplish this, a validation program was developed to evaluate the strengths and weaknesses of the model's density output by comparing it to SETA-2 (satellite electrostatic accelerometer) total mass density measurements.

    17. Modeling-Computer Simulations (Laney, 2005) | Open Energy Information

      Open Energy Info (EERE)

      in the near surface: Available technologies for monitoring CO2 in the near-surface environment include (1) the infrared gas analyzer (IRGA) for measurement of concentrations at...

    18. Mathematical modeling and computer simulation of processes in energy systems

      SciTech Connect (OSTI)

      Hanjalic, K.C. )

      1990-01-01

      This book is divided into the following chapters: 1. Modeling techniques and tools (fundamental concepts of modeling); 2. Fluid flow, heat and mass transfer, chemical reactions, and combustion; 3. Processes in energy equipment and plant components (boilers, steam and gas turbines, IC engines, heat exchangers, pumps and compressors, nuclear reactors, steam generators and separators, energy transport equipment, energy converters, etc.); 4. New thermal energy conversion technologies (MHD, coal gasification and liquefaction, fluidized-bed combustion, pulse combustors, multistage combustion, etc.); 5. Combined cycles and plants, cogeneration; 6. Dynamics of energy systems and their components; 7. Integrated approach to energy systems modeling; and 8. Application of modeling in energy expert systems.

    19. Modeling-Computer Simulations At Kilauea East Rift Geothermal...

      Open Energy Info (EERE)

      importance of water convection for distributing heat in the East Rift Zone. References Albert J. Rudman, David Epp (1983) Conduction Models Of The Temperature Distribution In The...

    20. Modeling-Computer Simulations (Walker, Et Al., 2005) | Open Energy...

      Open Energy Info (EERE)

      occurrence model for geothermal systems based on fundamental geologic data. References J. D. Walker, A. E. Sabin, J. R. Unruh, J. Combs, F. C. Monastero (2005) Development Of...

    1. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

      SciTech Connect (OSTI)

      Tucker, M.D.; Khan, M.A.

      1996-04-01

      The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Attention has therefore turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226, and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump-and-treat optimization codes, and (4) decision support tools. Following the survey of applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

    2. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

      SciTech Connect (OSTI)

      Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

      2011-01-01

      In this paper, we describe an approach to integrating a Space-Time GIS data model with a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing datasets. We are in the process of porting the GIS module to an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion of the integration of GIS with high performance computing platforms.

    3. Modeling and Analysis of a Lunar Space Reactor with the Computer Code

      Office of Scientific and Technical Information (OSTI)

      RELAP5-3D/ATHENA (Conference) | SciTech Connect. Title: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA. The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE

    4. Demonstrating the improvement of predictive maturity of a computational model

      SciTech Connect (OSTI)

      Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

      2010-01-01

      We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature, and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model in predicting multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
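
      To make the ingredients of such an index concrete, here is a minimal Python sketch assuming a 1-D regime of applicability and an RMS definition of discrepancy; the published PMI formula differs in detail, so this only mirrors the idea (maturity rises with coverage, falls with discrepancy) rather than the authors' definition:

```python
import numpy as np

def discrepancy(measured, predicted):
    """Root-mean-square systematic disagreement (bias) between
    measurements and model predictions."""
    return float(np.sqrt(np.mean((np.asarray(measured) - np.asarray(predicted)) ** 2)))

def coverage(calibration_points, regime_bounds):
    """Fraction of a 1-D regime of applicability spanned by the
    calibration experiments (covered interval over total interval)."""
    lo, hi = regime_bounds
    pts = np.clip(np.sort(np.asarray(calibration_points, dtype=float)), lo, hi)
    return (pts[-1] - pts[0]) / (hi - lo)

def predictive_maturity_index(measured, predicted, calibration_points, regime_bounds):
    """Illustrative PMI: coverage discounted by normalized discrepancy.
    This is a hypothetical stand-in for the published definition."""
    d = discrepancy(measured, predicted)
    scale = np.std(measured) or 1.0      # avoid dividing by zero spread
    return coverage(calibration_points, regime_bounds) / (1.0 + d / scale)
```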

    5. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

      DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

      James Menart

      2013-06-07

      This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

    6. Modeling-Computer Simulations (Gritto & Majer) | Open Energy...

      Open Energy Info (EERE)

      are shown in Figure 1. The parameters of the fault were modeled after Coates and Schoenberg (1995), where the orientation of the fault relative to the finite-difference grid...

    7. Computer support to run models of the atmosphere. Final report

      SciTech Connect (OSTI)

      Fung, I.

      1996-08-30

      This research is focused on a better quantification of the variations in CO₂ exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO₂ distribution. The principal tool involves using a global three-dimensional tracer transport model to advect and convect CO₂ in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.

    8. Cielo Computational Environment Usage Model With Mappings to...

      Office of Scientific and Technical Information (OSTI)

      This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure ...

    9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

      2016-08-19

      Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
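
      For reference, the damped normal-equations step that dominates the cost of a dense Levenberg-Marquardt iteration, and that the authors replace with a recycled Krylov-subspace projection, can be sketched as follows in Python; the residual and jacobian callables are hypothetical placeholders, not the Julia/MADS implementation:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, tol=1e-8, max_iter=50):
    """Minimal dense Levenberg-Marquardt: each step solves
    (J^T J + lam*I) dx = -J^T r, the linear system that becomes
    expensive for highly parameterized models."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                  # reject step, increase damping
    return x
```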

    10. DEVELOPMENT OF PLASTICITY MODEL USING NON-ASSOCIATED FLOW RULE FOR HCP MATERIALS INCLUDING ZIRCONIUM FOR NUCLEAR APPLICATIONS

      SciTech Connect (OSTI)

      Michael V. Glazoff; Jeong-Whan Yoon

      2013-08-01

      In this report (prepared in collaboration with Prof. Jeong-Whan Yoon, Deakin University, Melbourne, Australia) a research effort was made to develop a non-associated flow rule for zirconium. Since Zr is a hexagonally close-packed (hcp) material, it is impossible to describe its plastic response under arbitrary loading conditions with any associated flow rule (e.g., von Mises). As a result of strong tension-compression asymmetry of the yield stress and anisotropy, zirconium displays plastic behavior that requires a more sophisticated approach. Consequently, a new general asymmetric yield function has been developed which accommodates mathematically the four directional anisotropies along 0 degrees, 45 degrees, 90 degrees, and biaxial, under tension and compression. Stress anisotropy has been completely decoupled from the r-value by using non-associated flow plasticity, where the yield function and plastic potential have been treated separately to capture stress and r-value directionalities, respectively. This theoretical development has been verified using Zr alloys at room temperature as an example, as these materials have a very strong SD (strength differential) effect. The proposed yield function models reasonably well the evolution of yield surfaces for a zirconium clock-rolled plate during in-plane and through-thickness compression. It has been found that this function can predict both tension and compression asymmetry mathematically without any numerical tolerance and shows significant improvement compared to any reported functions. Finally, at the end of the report, a program of further research is outlined aimed at constructing tensorial relationships for the temperature- and fluence-dependent creep surfaces for Zr, Zircaloy-2, and Zircaloy-4.

    11. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

      SciTech Connect (OSTI)

      XU, J.; COSTANTINO, C.; HOFMAYER, C.

      2006-06-26

      This paper discusses computations of seismic-induced soil pressures using finite element models for deeply embedded and/or buried stiff structures such as those appearing in the conceptual designs of structures for advanced reactors.

    12. Towards a Computational Model of a Methane Producing Archaeum

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Peterson, Joseph R.; Labhsetwar, Piyush; Ellermeier, Jeremy R.; Kohler, Petra R. A.; Jain, Ankur; Ha, Taekjip; Metcalf, William W.; Luthey-Schulten, Zaida

      2014-01-01

      Progress towards a complete model of the methanogenic archaeum Methanosarcina acetivorans is reported. We characterized the size distribution of the cells using differential interference contrast microscopy, finding them to be ellipsoidal with mean length and width of 2.9 μm and 2.3 μm, respectively, when grown on methanol and 30% smaller when grown on acetate. We used the single-molecule pull-down (SiMPull) technique to measure the average copy number of the Mcr complex and ribosomes. A kinetic model for the methanogenesis pathways based on biochemical studies and recent metabolic reconstructions for several related methanogens is presented. In this model, 26 reactions in the methanogenesis pathways are coupled to a cell mass production reaction that updates enzyme concentrations. RNA expression data (RNA-seq) measured for cell cultures grown on acetate and methanol is used to estimate relative protein production per mole of ATP consumed. The model captures the experimentally observed methane production rates for cells growing on methanol and is most sensitive to the number of methyl-coenzyme-M reductase (Mcr) and methyl-tetrahydromethanopterin:coenzyme-M methyltransferase (Mtr) proteins. A draft transcriptional regulation network based on known interactions is proposed which we intend to integrate with the kinetic model to allow dynamic regulation.
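
      A toy Python sketch of the modeling pattern described, a pathway flux coupled to a cell-mass production term that feeds back into enzyme levels, is shown below; all rate constants are hypothetical, and the real model couples 26 reactions rather than one:

```python
from scipy.integrate import solve_ivp

def rhs(t, y, k_cat=1.0, K_m=0.5, y_x=0.1, k_e=0.05):
    """Toy pathway: a Michaelis-Menten flux coupled to a biomass
    production term that in turn raises the enzyme level, echoing the
    coupling of pathway reactions to a cell-mass reaction."""
    substrate, enzyme, biomass = y
    flux = k_cat * enzyme * substrate / (K_m + substrate)
    growth = y_x * flux * biomass           # cell mass produced per unit flux
    return [-flux, k_e * growth, growth]    # growth updates enzyme concentration

sol = solve_ivp(rhs, (0.0, 200.0), [10.0, 0.1, 0.01])
print(sol.y[:, -1])                         # final substrate, enzyme, biomass
```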

    13. Theoretical and computer models of detonation in solid explosives

      SciTech Connect (OSTI)

      Tarver, C.M.; Urtiew, P.A.

      1997-10-01

      Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich-von Neumann-Döring (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High temperature, high density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model based on pressure dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state has reproduced this nanosecond time resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two- and three-dimensional hydrodynamic code calculations. The next generation reactive flow model requires improved equations of state and temperature dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.
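
      The JWL products equation of state mentioned above has a standard closed form; a small Python sketch evaluating it follows, with all parameter values left as inputs rather than a calibrated explosive model:

```python
import math

def jwl_pressure(v, e, A, B, R1, R2, omega):
    """Jones-Wilkins-Lee (JWL) equation of state for detonation products:
    p = A (1 - omega/(R1 v)) exp(-R1 v) + B (1 - omega/(R2 v)) exp(-R2 v) + omega e / v,
    where v is the relative volume V/V0 and e the internal energy per unit
    initial volume. A, B, R1, R2, omega are fit coefficients; the values
    passed in here would be hypothetical, not a calibrated parameter set."""
    return (A * (1.0 - omega / (R1 * v)) * math.exp(-R1 * v)
            + B * (1.0 - omega / (R2 * v)) * math.exp(-R2 * v)
            + omega * e / v)
```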

    14. Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard

      2013-06-01

      Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor unit(s). These systems move refrigerant between the outdoor and indoor units, which eliminates the need for duct work in most cases. Since many applications install duct work in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor which may lead to operational differences compared to single-speed compression systems. To characterize operational differences, the computer model performance curves used

    15. Computer model for characterizing, screening, and optimizing electrolyte systems

      SciTech Connect (OSTI)

      Gering, Kevin L.

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Given that the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    16. Computational Combustion

      SciTech Connect (OSTI)

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark-ignition, diesel, and homogeneous-charge compression-ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet-lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

    17. ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION (Conference)

      Office of Scientific and Technical Information (OSTI)

      Recently, the question of the relevance of so-called quantum chaos has been raised in applications to quantum computation [2,3]. Indeed, according to the general approach to closed systems of a finite number of interacting Fermi particles (see, e.g., [4,5]), with an increase of the interaction between qubits a kind of chaos is expected

    18. Emergency Response Equipment and Related Training: Airborne Radiological Computer System (Model II)

      SciTech Connect (OSTI)

      David P. Colton

      2007-02-28

      The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record the airborne gamma radiation levels and the corresponding latitude and longitude coordinates, and to provide a first overview of the extent and severity of an accident's impact. Second, the portable system had to be light enough and durable enough that it could be mounted in an aircraft, ground vehicle, or watercraft. Third, the system must control the collection and storage of the data, as well as provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system would only be used for analyzing and plotting the data. In essence, the provided equipment is composed of an acquisition system and an analysis system. The data can be transferred from the acquisition system to the analysis system at the end of the data collection or at some other agreeable time.

    19. COMPUTATIONAL THERMODYNAMIC MODELING OF HOT CORROSION OF ALLOYS HAYNES 242 AND HASTELLOY N FOR MOLTEN SALT SERVICE

      SciTech Connect (OSTI)

      Michael V. Glazoff; Piyush Sabharwall; Akira Tokuhiro

      2014-09-01

      An evaluation of thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in eutectic mixtures of KF and ZrF4 is carried out for the development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using the computational thermodynamics tool ThermoCalc, leading to the development of a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was associated with chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance. This was not surprising given that it was developed originally to withstand the harsh conditions of a molten salt environment. However, the results obtained in this study provided confidence in the employed methods of computational thermodynamics and could be used for future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.

    20. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

      SciTech Connect (OSTI)

      Vigil, Benny Manuel; Ballance, Robert; Haskell, Karen

      2012-08-09

      Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

    1. Computational modeling of drug-resistant bacteria. Final report

      SciTech Connect (OSTI)

      MacDougall, Preston

      2015-03-12

      Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

    2. Computational models for the berry phase in semiconductor quantum dots

      SciTech Connect (OSTI)

      Prabhakar, S.; Melnik, R. V. N.; Sebetci, A.

      2014-10-06

      By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.

    3. Computer modeling of a CFB (circulating fluidized bed) gasifier

      SciTech Connect (OSTI)

      Gidaspow, D.; Ding, J.

      1990-06-01

      The overall objective of this investigation is to develop experimentally verified models for circulating fluidized bed (CFB) combustors. This report presents an extension of our cold flow modeling of a CFB given in our first quarterly report of this project and published in "Numerical Methods for Multiphase Flows," edited by I. Celik, D. Hughes, C. T. Crowe and D. Lankford, FED-Vol. 91, American Society of Mechanical Engineers, pp. 47-56 (1990). The title of the paper is "Multiphase Navier-Stokes Equation Solver" by D. Gidaspow, J. Ding and U.K. Jayaswal. To the two-dimensional code described in the above paper we added the energy equations and the conservation of species equations to describe a producer of synthesis gas from char. Under the simulation conditions the injected oxygen reacted near the inlet. The solid-gas mixing was sufficiently rapid that no undesirable hot spots were produced. This simulation illustrates the code's capability to model CFB reactors. 15 refs., 20 figs.

    4. Computer model for characterizing, screening, and optimizing electrolyte systems

      Energy Science and Technology Software Center (OSTI)

      2015-06-15

      Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Given that the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.

    5. Computer-Aided Construction of Combustion Chemistry Models

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Constructing Accurate Combustion Chemistry Models: Butanols. William H. Green & Michael Harper, MIT Dept. of Chem. Eng., CEFRC Annual Meeting, Sept. 2010. The people who did this work: Dr. C. Franklin Goldsmith, Greg Magoon, Shamel Merchant, Dr. Sumathy Raman, Dr. Sandeep Sharma, Prof. Kevin Van Geem, Steven Pyl. We are also grateful to: Joshua Allen, Prof. Paul Barton, Dr. Stephen Klippenstein, Prof. Guy Marin, Jeffrey Mo, Dr. S-A Seyed-Reihani, Dr. Richard West, and many CEFRC members. One of Our Project's

    6. Computational Human Performance Modeling For Alarm System Design

      SciTech Connect (OSTI)

      Jacques Hugo

      2012-07-01

      The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm system designs are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm-handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
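
      As an illustration of the discrete-event approach described, here is a toy Python sketch of a single operator serving a Poisson stream of alarms; it is a minimal stand-in under assumed arrival and handling parameters, not the Idaho National Laboratory model:

```python
import heapq, random

def mean_alarm_wait(arrival_rate, handle_time, horizon, seed=0):
    """Toy discrete-event model: Poisson alarm arrivals served FIFO by a
    single operator with a fixed handling time. Returns the mean time an
    alarm waits before the operator starts on it (a crude workload proxy)."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while True:                              # generate Poisson arrival times
        t += rng.expovariate(arrival_rate)
        if t > horizon:
            break
        heapq.heappush(arrivals, t)
    busy_until, waits = 0.0, []
    while arrivals:
        a = heapq.heappop(arrivals)
        start = max(a, busy_until)           # wait if the operator is busy
        waits.append(start - a)
        busy_until = start + handle_time
    return sum(waits) / len(waits) if waits else 0.0

print(mean_alarm_wait(arrival_rate=0.5, handle_time=1.5, horizon=480.0))
```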

    7. PLUVIUS: a generalized one-dimensional model of reactive pollutant behavior, including dry deposition, precipitation formation, and wet removal. Second edition

      SciTech Connect (OSTI)

      Easter, R.C.; Hales, J.M.

      1984-11-01

      This report is a second-edition user's manual for the PLUVIUS reactive-storm model. The PLUVIUS code simulates the formation of storm systems of a variety of types, and characterizes the behavior of air pollutants as they flow through, react within, and are scavenged by the storms. The computer code supplied with this report is known as PLUVIUS MOD 5.0, and is a substantial improvement over the MOD 3.1 version given in the original user's manual. Example applications of MOD 5.0 are given in the report to facilitate rapid application of the code for a variety of specific uses. 22 references, 7 figures, 48 tables.

    8. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    9. The application of computer modeling to health effect research

      SciTech Connect (OSTI)

      Yang, R.S.H.

      1996-12-31

      In the United States, estimates show that more than 30,000 hazardous waste disposal sites exist, not including military installations, U.S. Department of Energy nuclear facilities, and hundreds of thousands of underground fuel storage tanks; these sites undoubtedly have their own respective hazardous waste chemical problems. When so many sites contain hazardous chemicals, how does one study the health effects of the chemicals at these sites? There could be many different answers, but none would be perfect. For an area as complex and difficult as the study of chemical mixtures associated with hazardous waste disposal sites, there are no perfect approaches and protocols. Human exposure to chemicals, be it environmental or occupational, is rarely, if ever, limited to a single chemical. Therefore, it is essential that we consider multiple chemical effects and interactions in our risk assessment process. Systematic toxicity testing of chemical mixtures in the environment or workplace that uses conventional toxicology methodologies is highly impractical because of the immense numbers of mixtures involved. For example, about 600,000 chemicals are being used in our society. Just considering binary chemical mixtures, this means that there could be 600,000 x 599,999/2 = 179,999,700,000 pairs of chemicals. Assuming that only one in a million of these pairs of chemicals acts synergistically or has other toxicologic interactions, there would still be about 180,000 binary chemical mixtures possessing toxicologic interactions. Moreover, toxicologic interactions undoubtedly exist among chemical mixtures with three or more component chemicals; the number of possible combinations for these latter mixtures is almost infinite. These are astronomically large numbers with respect to systematic toxicity testing. 22 refs., 5 figs., 1 tab.
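
      The pairwise count above is just the binomial coefficient C(600,000, 2), which is easy to verify:

```python
import math

n = 600_000
pairs = math.comb(n, 2)        # unordered binary mixtures: n*(n-1)/2
print(pairs)                   # 179999700000
print(pairs // 1_000_000)      # 179999, i.e. ~180,000 at a one-in-a-million rate
```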

    10. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

      SciTech Connect (OSTI)

      Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

      2008-09-01

      Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high

    11. Techno-Economic Modeling - Building New Battery Systems on the Computer -

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Joint Center for Energy Storage Research October 22, 2015, Accomplishments Techno-Economic Modeling - Building New Battery Systems on the Computer JCESR is applying techno-economic models to project the performance and cost of a wide array of promising new battery systems before they are prototyped. The results from techno-economic modeling establish performance "floors" for discovery science teams looking for new anodes, cathodes, and electrolytes for a beyond lithium-ion battery,

    12. Briefing package for the Yucca Flat pre-emptive review, including overview, UZ model, SZ volcanics model and summary and conclusions sections

      SciTech Connect (OSTI)

      Kwicklis, Edward Michael; Keating, Elizabeth H

      2010-12-02

      Much progress has been made in the last several years in modeling radionuclide transport from tests conducted both in the unsaturated zone and saturated volcanic rocks of Yucca Flat, Nevada. The presentations to the DOE NNSA pre-emptive review panel contained herein document the progress to date, and discuss preliminary conclusions regarding the present and future extents of contamination resulting from past nuclear tests. The presentations also discuss possible strategies for addressing uncertainty in the model results.

    13. Verification of a VRF Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Nigusse, Bereket; Raustad, Richard

      2013-06-01

      This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from the manufacturer's published performance data. The verification compared the simulation output directly to the manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
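
      For readers unfamiliar with equation-fit models, here is a minimal Python sketch of the dual-range bi-quadratic curve form described above; the coefficient sets, the boundary temperature, and the temperature conventions are hypothetical placeholders, not EnergyPlus defaults:

```python
def biquadratic(t_iwb, t_odb, c):
    """Bi-quadratic performance curve used by equation-fit HVAC models:
    f = c0 + c1*Tiwb + c2*Tiwb^2 + c3*Todb + c4*Todb^2 + c5*Tiwb*Todb,
    with Tiwb an indoor and Todb an outdoor air temperature."""
    c0, c1, c2, c3, c4, c5 = c
    return (c0 + c1 * t_iwb + c2 * t_iwb**2
            + c3 * t_odb + c4 * t_odb**2 + c5 * t_iwb * t_odb)

def capacity(t_iwb, t_odb, rated_capacity, low_coeffs, high_coeffs, boundary_odb=20.0):
    """Dual-range curve: one coefficient set below a boundary outdoor
    temperature and another above it, echoing the dual-range fits
    described above. All numeric values here are illustrative."""
    coeffs = low_coeffs if t_odb < boundary_odb else high_coeffs
    return rated_capacity * biquadratic(t_iwb, t_odb, coeffs)
```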

    14. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

      SciTech Connect (OSTI)

      Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan; Parker, Jack C.; Watson, David B; Jardine, Philip M

      2010-01-01

      Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, a number of compute nodes equal to the number of adjustable parameters (when forward differences are used for the Jacobian approximation), or twice that number (if center differences are used), is employed to reduce the calibration time from days or weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
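
      The MPI-parallel finite-difference Jacobian strategy described above can be sketched as follows with mpi4py; forward_model is a hypothetical stand-in for a HydroGeoChem run, and the real implementation distributes whole model runs across compute nodes rather than loop iterations within one process:

```python
import numpy as np
from mpi4py import MPI

def parallel_jacobian(forward_model, params, h=1e-6):
    """Forward-difference Jacobian with columns distributed over MPI ranks:
    each rank perturbs a subset of the adjustable parameters, runs the
    forward model, and the columns are gathered everywhere at the end."""
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    params = np.asarray(params, dtype=float)
    base = forward_model(params)                 # every rank runs the base case
    cols = []
    for j in range(rank, params.size, size):     # round-robin over parameters
        p = params.copy()
        p[j] += h
        cols.append((j, (forward_model(p) - base) / h))
    gathered = comm.allgather(cols)              # exchange partial columns
    J = np.empty((base.size, params.size))
    for part in gathered:
        for j, col in part:
            J[:, j] = col
    return J
```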

    15. Expansion Hamiltonian model for a diatomic molecule adsorbed on a surface: Vibrational states of the CO/Cu(100) system including surface vibrations

      SciTech Connect (OSTI)

      Meng, Qingyong; Meyer, Hans-Dieter

      2015-10-28

      Molecular-surface studies are often done by assuming a corrugated, static (i.e., rigid) surface. To be able to investigate the effects that vibrations of surface atoms may have on spectra and cross sections, an expansion Hamiltonian model is proposed on the basis of the recently reported [R. Marquardt et al., J. Chem. Phys. 132, 074108 (2010)] SAP potential energy surface (PES), which was built for the CO/Cu(100) system with a rigid surface. In contrast to other molecule-surface coupling models, such as the modified surface oscillator model, the coupling between the adsorbed molecule and the surface atoms is already included in the present expansion SAP-PES model, in which a Taylor expansion around the equilibrium positions of the surface atoms is performed. To test the quality of the Taylor expansion, a direct model that avoids the expansion is also studied. The latter, however, requires that only one movable surface atom be included. On the basis of the present expansion and direct models, the effects of a moving top copper atom (the one to which CO is bound) on the energy levels of a bound CO/Cu(100) system are studied. For this purpose, multiconfiguration time-dependent Hartree calculations are carried out to obtain the vibrational fundamentals and overtones of the CO/Cu(100) system including a movable top copper atom. In order to interpret the results, a simple model consisting of two coupled harmonic oscillators is introduced. From these calculations, the vibrational levels of the CO/Cu(100) system as a function of the frequency of the top copper atom are discussed.
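
      The two-coupled-oscillator interpretation model mentioned at the end can be reproduced in a few lines of Python; the masses and force constants below are hypothetical inputs in arbitrary units, and the sketch just returns the two normal-mode frequencies:

```python
import numpy as np

def normal_modes(m1, m2, k1, k2, kc):
    """Normal-mode angular frequencies of two coupled harmonic oscillators
    with potential V = (k1*x1^2 + k2*x2^2 + kc*(x1 - x2)^2) / 2.
    Solves K x = w^2 M x via the eigenvalues of M^-1 K."""
    K = np.array([[k1 + kc, -kc], [-kc, k2 + kc]], dtype=float)
    M = np.diag([float(m1), float(m2)])
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sqrt(np.sort(w2.real))

# E.g., a light adsorbate mode coupled to a heavier, stiffer surface-atom
# mode (all values hypothetical):
print(normal_modes(m1=1.0, m2=4.0, k1=1.0, k2=8.0, kc=0.3))
```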

    16. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    17. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

      SciTech Connect (OSTI)

      Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

      2009-10-12

      In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

    18. Recent evolution of the offline computing model of the NOvA experiment

      SciTech Connect (OSTI)

      Habig, Alec; Norman, A.; Group, Craig

      2015-12-23

      The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

    19. Recent evolution of the offline computing model of the NOvA experiment

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Habig, Alec; Norman, A.; Group, Craig

      2015-12-23

      The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

    20. Once-through CANDU reactor models for the ORIGEN2 computer code

      SciTech Connect (OSTI)

      Croff, A.G.; Bjerke, M.A.

      1980-11-01

      Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.

    1. DOE Issues Funding Opportunity for Advanced Computational and Modeling Research for the Electric Power System

      Broader source: Energy.gov [DOE]

      The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

    2. Application of a computational glass model to the shock response of soda-lime glass

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Gorfain, Joshua E.; Key, Christopher T.; Alexander, C. Scott

      2016-04-20

      This article details the implementation and application of the glass-specific computational constitutive model by Holmquist and Johnson [1] to simulate the dynamic response of soda-lime glass under high-rate and high-pressure shock conditions. The predictive capabilities of this model are assessed through comparison of experimental data with numerical results from computations using the CTH shock physics code. The formulation of this glass model is reviewed in the context of its implementation within CTH. Using a variety of experimental data compiled from the open literature, a complete parameterization of the model describing the observed behavior of soda-lime glass is developed. Simulation results using the calibrated soda-lime glass model are compared to flyer plate and Taylor rod impact experimental data covering a range of impact and failure conditions spanning an order of magnitude in velocity and pressure. In conclusion, the complex behavior observed in the experimental testing is captured well in the computations, demonstrating the capability of the glass model within CTH.

    3. Technical Review of the CENWP Computational Fluid Dynamics Model of the John Day Dam Forebay

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

      2010-12-01

      The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, the Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model, which is run in the CFD solver software STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound; however, CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.

    4. Psychosocial and Cultural Modeling in Human Computation Systems: A Gamification Approach

      SciTech Connect (OSTI)

      Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.; Butner, R. Scott

      2013-11-20

      “Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

    5. Vehicle Technologies Office Merit Review 2014: Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

      Broader source: Energy.gov [DOE]

      Presentation given by NREL at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about significant enhancement of computational...

    6. Computer modeling of electromagnetic edge containment in twin-roll casting

      SciTech Connect (OSTI)

      Chang, F.C.; Turner, L.R.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1998-07-01

      This paper presents modeling studies of magnetohydrodynamics (MHD) analysis in twin-roll casting. Argonne National Laboratory (ANL) and Inland Steel Company have worked together to develop a 3-D computer model that can predict eddy currents, fluid flows, and liquid metal containment for an electromagnetic (EM) edge containment device. This mathematical model can greatly shorten casting research on the use of EM fields for liquid metal containment and control. It can also optimize the existing casting processes and minimize expensive, time-consuming full-scale testing. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in EM edge dams designed at Inland Steel for twin-roll casting. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve Maxwell's equations, Ohm's law, Navier-Stokes equations, and transport equations of turbulence flow in a casting process that uses EM fields. ELEKTRA is able to predict the eddy-current distribution and electromagnetic forces in complex geometry. CaPS-EM is capable of modeling fluid flows with free surfaces and dynamic rollers. The computed 3-D magnetic fields and induced eddy currents in ELEKTRA are used as input to flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.

    7. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    8. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      SciTech Connect (OSTI)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-11-01

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

    9. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

      SciTech Connect (OSTI)

      Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

      2011-06-01

      This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

    10. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    11. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

      SciTech Connect (OSTI)

      Maiden, Wendy M.

      2010-05-01

      Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

    12. On the Impact of Execution Models: A Case Study in Computational Chemistry

      SciTech Connect (OSTI)

      Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram; Manzano Franco, Joseph B.; Vishnu, Abhinav; Hoisie, Adolfy

      2015-05-25

      Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Through this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
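
      The reported gap between static scheduling and work stealing can be illustrated with a toy simulation. Below is a minimal sketch in plain Python with made-up task costs; it is not the runtime studied in the paper, only an illustration of why stealing narrows the makespan when task costs are uneven.

      ```python
      # Toy comparison of static scheduling vs. work stealing on uneven tasks.
      # Illustrative only -- not the execution models measured in the article.
      import random
      from collections import deque

      random.seed(1)
      tasks = [random.expovariate(1.0) for _ in range(400)]  # uneven task costs
      WORKERS = 8

      # Static: pre-assign an equal count of tasks to each worker.
      static_makespan = max(sum(tasks[w::WORKERS]) for w in range(WORKERS))

      # Work stealing: an idle worker with an empty queue steals half of the
      # largest remaining queue before continuing.
      queues = [deque(tasks[w::WORKERS]) for w in range(WORKERS)]
      clocks = [0.0] * WORKERS
      while any(queues):
          w = min(range(WORKERS), key=lambda i: clocks[i])   # next idle worker
          if not queues[w]:
              victim = max(range(WORKERS), key=lambda i: len(queues[i]))
              for _ in range(max(1, len(queues[victim]) // 2)):
                  queues[w].append(queues[victim].pop())
          clocks[w] += queues[w].popleft()
      stealing_makespan = max(clocks)

      print(f"static makespan   : {static_makespan:.1f}")
      print(f"stealing makespan : {stealing_makespan:.1f}")  # typically lower
      ```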

    13. Superior model for fault tolerance computation in designing nano-sized circuit systems

      SciTech Connect (OSTI)

      Singh, N. S. S.; Muthuvalu, M. S.; Asirvadam, V. S.

      2014-10-24

      As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time-consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first describes the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for very large numbers of nano-electronic circuits. Second, using the developed tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure given by BDEC tends to be lower than that given by PGM. This lower BDEC measure is explained in the paper using the distribution of different input signal patterns over time for same-functionality circuits. Simulation results show that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
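
      For readers unfamiliar with the Probabilistic Gate Model, the following minimal sketch shows its basic propagation step: each signal carries a probability of being logic 1, and each gate output is flipped with a fixed error probability. The circuit, error rate, and input distribution are illustrative, and the independence assumption makes the result approximate under reconvergent fanout.

      ```python
      # Minimal probabilistic gate model (PGM) sketch: every gate output is
      # flipped with probability EPS (von Neumann error model). Signals carry
      # P(signal == 1). Exact for fanout-free circuits; the shared input 'b'
      # below makes this an approximation, as in the basic PGM.
      EPS = 0.05

      def nand_ideal(pa, pb):
          return 1.0 - pa * pb                  # P(output==1) for a perfect NAND

      def nand_pgm(pa, pb):
          p = nand_ideal(pa, pb)
          return (1 - EPS) * p + EPS * (1 - p)  # output flipped w.p. EPS

      # Example circuit: y = NAND(NAND(a, b), NAND(b, c)), inputs uniform.
      pa = pb = pc = 0.5
      p_ideal  = nand_ideal(nand_ideal(pa, pb), nand_ideal(pb, pc))
      p_faulty = nand_pgm(nand_pgm(pa, pb), nand_pgm(pb, pc))
      print(p_ideal, p_faulty)   # drift between the two reflects gate errors
      ```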

    14. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      SciTech Connect (OSTI)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-07-28

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

    15. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

      2014-07-28

      The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

    16. NREL Computer Models Integrate Wind Turbines with Floating Platforms (Fact Sheet)

      SciTech Connect (OSTI)

      Not Available

      2011-07-01

      Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.

    17. MHK technology developments include current

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      technology developments include current energy conversion (CEC) devices, for example, hydrokinetic turbines that extract power from water currents (riverine, tidal, and ocean) and wave energy conversion (WEC) devices that extract power from wave motion. Sandia's MHK research leverages decades of experience in engineering, design, and analysis of wind power technologies, and its vast research complex, including high-performance computing (HPC), advanced materials and coatings, nondestructive

    18. Spectroscopy, modeling and computation of metal chelate solubility in supercritical CO{sub 2}

      SciTech Connect (OSTI)

      J. F. Brennecke; M. A. Stadtherr

      1999-12-10

      The overall objectives of this project were to gain a fundamental understanding of the solubility and phase behavior of metal chelates in supercritical CO{sub 2}. Extraction with CO{sub 2} is an excellent way to remove organic compounds from soils, sludges and aqueous solutions, and recent research has demonstrated that, together with chelating agents, it is a viable way to remove metals, as well. In this project the authors sought to gain fundamental knowledge that is vital to computing phase behavior, and modeling and designing processes using CO{sub 2} to separate organics and metal compounds from DOE mixed wastes. The overall program was a comprehensive one to measure, model and compute the solubility of metal chelate complexes in supercritical CO{sub 2} and CO{sub 2}/cosolvent mixtures. Through a combination of phase behavior measurements, spectroscopy and the development of a new computational technique, the authors have achieved a completely reliable way to model metal chelate solubility in supercritical CO{sub 2} and CO{sub 2}/co-contaminant mixtures. Thus, they can now design and optimize processes to extract metals from solid matrices using supercritical CO{sub 2}, as an alternative to hazardous organic solvents that create their own environmental problems, even while helping in metals decontamination.

    19. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

      SciTech Connect (OSTI)

      Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

      2009-01-10

      The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
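
      The "manually SIMDize four independent columns" optimization can be pictured with a vectorized toy kernel: identical column physics becomes a single array operation across columns. Here numpy stands in for SIMD intrinsics, and the cumulative-extinction update rule is purely illustrative, not the GEOS-5 solar radiation code.

      ```python
      # Column physics applied element-wise across columns: N columns advance
      # in lockstep like SIMD lanes. Toy transmittance model only.
      import numpy as np

      levels, columns = 72, 4
      tau = np.random.rand(levels, columns)    # per-layer optical depth

      def transmittance_scalar(tau_col):       # baseline: one column at a time
          t, acc = np.empty_like(tau_col), 0.0
          for k in range(len(tau_col)):        # cumulative extinction downward
              acc += tau_col[k]
              t[k] = np.exp(-acc)
          return t

      def transmittance_simd(tau_all):         # all columns in one vector op
          return np.exp(-np.cumsum(tau_all, axis=0))

      ref = np.stack([transmittance_scalar(tau[:, c]) for c in range(columns)], axis=1)
      assert np.allclose(ref, transmittance_simd(tau))
      ```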

    20. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

      SciTech Connect (OSTI)

      Edwards, Brian Keith; Rivera, Michael K.; Boero, Riccardo

      2014-04-01

      This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC extended the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

    1. Discretionary Allocation Request | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Fusion Energy, Magnetic Fusion Materials Science, Condensed Matter and Materials Physics ... This may include information such as: - computational methods - programming model - ...

    2. Computer modeling of electromagnetic fields and fluid flows for edge containment in continuous casting

      SciTech Connect (OSTI)

      Chang, F.C.; Hull, J.R.; Wang, Y.H.; Blazek, K.E.

      1996-02-01

      A computer model was developed to predict eddy currents and fluid flows in molten steel. The model was verified by comparing predictions with experimental results of liquid-metal containment and fluid flow in electromagnetic (EM) edge dams (EMDs) designed at Inland Steel for twin-roll casting. The model can optimize the EMD design so it is suitable for application, and minimize expensive, time-consuming full-scale testing. Numerical simulation was performed by coupling a three-dimensional (3-D) finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA is able to predict the eddy-current distribution and the electromagnetic forces in complex geometries. CaPS-EM is capable of modeling fluid flows with free surfaces. Results of the numerical simulation compared well with measurements obtained from a static test.

    3. An integrated computer modeling environment for regional land use, air quality, and transportation planning

      SciTech Connect (OSTI)

      Hanley, C.J.; Marshall, N.L.

      1997-04-01

      The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New Mexico, area. This environment provides predictive capability combined with a graphical and geographical interface. The graphical interface shows the causal relationships between data and policy scenarios and supports alternative model formulations. Scenarios are launched from within a Geographic Information System (GIS), and data produced by each model component at each time step within a simulation is stored in the GIS. A menu-driven query system is utilized to review link-based results and regional and area-wide results. These results can also be compared across time or between alternative land use scenarios. Using this environment, policies can be developed and implemented based on comparative analysis, rather than on single-step future projections. 16 refs., 3 figs., 2 tabs.

    4. Computational fluid dynamics modeling of coal gasification in a pressurized spout-fluid bed

      SciTech Connect (OSTI)

      Zhongyi Deng; Rui Xiao; Baosheng Jin; He Huang; Laihong Shen; Qilei Song; Qianjun Li

      2008-05-15

      Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, has been extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas phase reaction submodels. The rates of heterogeneous reactions are determined by combining the Arrhenius rate and the diffusion rate. The homogeneous reactions of the gas phase can be treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
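
      The combination of an Arrhenius rate with a diffusion rate for heterogeneous char reactions is commonly expressed as resistances in series, so the slower step controls the overall rate. A minimal sketch with placeholder constants follows; the paper's exact formulation may differ.

      ```python
      # Series combination of chemical (Arrhenius) and diffusion rates:
      # 1/k_eff = 1/k_chem + 1/k_diff. All constants are placeholders.
      import math

      def k_arrhenius(A, E, T, R=8.314):
          """Chemical rate constant A*exp(-E/(R*T)); E in J/mol, T in K."""
          return A * math.exp(-E / (R * T))

      def k_effective(k_chem, k_diff):
          """The slower of the two steps controls the overall rate."""
          return 1.0 / (1.0 / k_chem + 1.0 / k_diff)

      k_c = k_arrhenius(A=250.0, E=1.3e5, T=1200.0)  # illustrative values
      k_d = 0.06                                     # illustrative diffusion rate
      print(k_effective(k_c, k_d))  # close to min(k_c, k_d) when they differ widely
      ```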

    5. Integrated modeling of CO2 storage and leakage scenarios including transitions between super- and sub-critical conditions, and phase change between liquid and gaseous CO2

      SciTech Connect (OSTI)

      Pruess, K.

      2011-05-15

      Storage of CO{sub 2} in saline aquifers is intended to be at supercritical pressure and temperature conditions, but CO{sub 2} leaking from a geologic storage reservoir and migrating toward the land surface (through faults, fractures, or improperly abandoned wells) would reach subcritical conditions at depths shallower than 500-750 m. At these and shallower depths, subcritical CO{sub 2} can form two-phase mixtures of liquid and gaseous CO{sub 2}, with significant latent heat effects during boiling and condensation. Additional strongly non-isothermal effects can arise from decompression of gas-like subcritical CO{sub 2}, the so-called Joule-Thomson effect. Integrated modeling of CO{sub 2} storage and leakage requires the ability to model non-isothermal flows of brine and CO{sub 2} at conditions that range from supercritical to subcritical, including three-phase flow of aqueous phase, and both liquid and gaseous CO{sub 2}. In this paper, we describe and demonstrate comprehensive simulation capabilities that can cope with all possible phase conditions in brine-CO{sub 2} systems. Our model formulation includes: (1) an accurate description of thermophysical properties of aqueous and CO{sub 2}-rich phases as functions of temperature, pressure, salinity and CO{sub 2} content, including the mutual dissolution of CO{sub 2} and H{sub 2}O; (2) transitions between super- and subcritical conditions, including phase change between liquid and gaseous CO{sub 2}; (3) one-, two-, and three-phase flow of brine-CO{sub 2} mixtures, including heat flow; (4) non-isothermal effects associated with phase change, mutual dissolution of CO{sub 2} and water, and (de-) compression effects; and (5) the effects of dissolved NaCl, and the possibility of precipitating solid halite, with associated porosity and permeability change. Applications to specific leakage scenarios demonstrate that the peculiar thermophysical properties of CO{sub 2} provide a potential for positive as well as negative

    6. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

      SciTech Connect (OSTI)

      Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

      2010-06-15

      The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such sheet materials are more susceptible to wrinkling, springback, and fracture during press-shop operations. In the automotive industry, the capability of sheet material for deep-drawing processes is characterized mainly by Forming Limit Diagrams (FLDs). However, new investigations at the Institute for Metal Forming Technology have shown that high-strength steel sheet and aluminum alloys exhibit increased formability when bending loads are superposed on stretching loads. Likewise, superposing shearing on in-plane uniaxial or biaxial tension changes formability because of the material's crystallographic texture. Such mixed stress and strain conditions, including bending and shearing effects, can occur in deep-drawing of complex car body parts as well as in subsequent forming operations such as flanging, but the resulting changes in formability cannot be described by the conventional FLC. Hence, failure criteria valid for these strain conditions, needed to improve failure prediction in numerical simulation codes, have been missing. To provide a suitable failure criterion that is easy to implement in FEA, a new semi-empirical model has been developed that accounts for the effects of bending and shearing on sheet metal formability. This failure criterion combines the so-called cFLC (combined Forming Limit Curve), which considers superposed bending loads, with the SFLC (Shear Forming Limit Curve), which includes the effect of shearing on sheet metal formability.

    7. A Computational Model for the Identification of Biochemical Pathways in the Krebs Cycle

      SciTech Connect (OSTI)

      Oliveira, Joseph S.; Bailey, Colin G.; Jones-Oliveira, Janet B.; Dixon, David A.; Gull, Dean W.; Chandler, Mary L.

      2003-03-01

      We have applied an algorithmic methodology that provably decomposes any complex network into a complete family of principal subcircuits to study the minimal circuits that describe the Krebs cycle. Every operational behavior that the network is capable of exhibiting can be represented by some combination of these principal subcircuits, and this computational decomposition is linearly efficient. We have developed a computational model, applicable to biochemical reaction systems, that accurately renders pathways of such reactions via directed hypergraphs (Petri nets), and we have applied the model to the citric acid cycle (Krebs cycle). The Krebs cycle, which oxidizes the acetyl group of acetyl CoA to CO2 and reduces NAD and FAD to NADH and FADH2, is a complex interacting set of nine subreaction networks. It was selected because of its familiarity to the biological community and because it exhibits enough complexity to be interesting as an introduction to this novel analytic approach. This study validates the algorithmic methodology for the identification of significant biochemical signaling subcircuits, based solely upon the mathematical model and not upon prior biological knowledge. The utility of the algebraic-combinatorial model for identifying the complete set of biochemical subcircuits as a data set is demonstrated for this important metabolic process.
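
      As a concrete picture of the directed-hypergraph (Petri net) representation, the sketch below encodes two Krebs-cycle steps as an incidence matrix over metabolite "places" and reaction "transitions". It is a data-structure illustration only, not the authors' decomposition algorithm.

      ```python
      # Petri-net view of two Krebs-cycle reactions: the incidence matrix
      # records consumption (-1) and production (+1) of each metabolite.
      import numpy as np

      places = ["citrate", "isocitrate", "alpha-ketoglutarate", "NAD", "NADH", "CO2"]
      transitions = {
          "aconitase":     {"citrate": -1, "isocitrate": +1},
          "isocitrate_dh": {"isocitrate": -1, "NAD": -1,
                            "alpha-ketoglutarate": +1, "NADH": +1, "CO2": +1},
      }

      C = np.zeros((len(places), len(transitions)), dtype=int)
      for j, arcs in enumerate(transitions.values()):
          for place, stoich in arcs.items():
              C[places.index(place), j] = stoich

      marking = np.array([2, 0, 0, 2, 0, 0])   # initial metabolite counts
      marking = marking + C[:, 0]              # fire "aconitase"
      print(dict(zip(places, marking)))
      ```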

    8. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

      SciTech Connect (OSTI)

      Shumilov, V. N.; Syryamkin, V. I.; Syryamkin, M. V.

      2015-11-17

      The paper puts forward principles of operation for devices that function similarly to the nervous system and brain of biological organisms. We propose an alternative method of studying diseases of the nervous system, which may significantly influence their prevention, treatment, or at least the retardation of their development: the use of computational and electronic models of the nervous system. Within this approach, we represent the brain as a huge electrical circuit composed of active units, namely neuron-like units and the connections between them. On this basis, we created computational and electronic models of elementary nervous systems, built on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli, and changes in those reactions, similar to the behavior of the simplest biological organisms. The models are capable of self-training and retraining in real time, without human intervention and without switching between operation and training modes. In our models, training and memorization take place continuously under the influence of stimuli on the organism. Training and the formation of new reflexes occur through the formation of new connections between excited neurons, wherever such connections are physically possible. Connections form without external influence, under the influence of local causes: a connection is formed between the output of one neuron and the input of another when the difference between their output and input potentials exceeds a value sufficient to form the new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of
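
      The local connection-formation rule described above is compact enough to state in code. The sketch below uses invented potentials and an arbitrary threshold purely to show the rule's logic.

      ```python
      # Connection formation between excited neurons: a link forms from one
      # neuron's output to another's input when the potential difference
      # exceeds a threshold. All values are illustrative.
      THRESHOLD = 0.3

      neurons = [
          {"id": 0, "excited": True,  "out_potential": 0.9, "in_potential": 0.2},
          {"id": 1, "excited": True,  "out_potential": 0.5, "in_potential": 0.4},
          {"id": 2, "excited": False, "out_potential": 0.1, "in_potential": 0.0},
      ]
      connections = set()

      for pre in neurons:
          for post in neurons:
              if pre is post or not (pre["excited"] and post["excited"]):
                  continue   # only simultaneously excited pairs qualify
              if pre["out_potential"] - post["in_potential"] > THRESHOLD:
                  connections.add((pre["id"], post["id"]))  # local rule, no supervisor

      print(connections)   # {(0, 1)}: the only pair where the rule fires
      ```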

    9. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
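
      A rough sketch of the calibration idea follows, assuming a lognormal background load and a toy queueing-style delay model; both assumptions are ours for illustration and are not specified by the patent.

      ```python
      # Characterize measured background load statistically, then feed the
      # characterization into a simple delay predictor. Toy model only.
      import numpy as np

      rng = np.random.default_rng(0)
      background_load = rng.lognormal(mean=0.0, sigma=0.6, size=10_000)

      # Probabilistic representation: empirical quantiles of observed load.
      quantiles = {q: float(np.quantile(background_load, q)) for q in (0.5, 0.9, 0.99)}

      def predicted_delay(base_ms, load, capacity=10.0):
          """Toy M/M/1-style inflation of a base delay with utilization."""
          rho = min(load / capacity, 0.99)
          return base_ms / (1.0 - rho)

      for q, load in quantiles.items():
          print(f"load p{int(q * 100)}: {load:.2f} -> {predicted_delay(5.0, load):.2f} ms")
      ```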

    10. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

      SciTech Connect (OSTI)

      Ermer, J. J.; Mosher, J. C.; Baillet, S.; Leahy, R. M.

      2001-01-01

      Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace based inverse methods like MUSIC [6], the total number of forward model evaluations can often approach an order of 10{sup 3} or 10{sup 4}. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form: F = GQ + N (1) where the observed forward field F (M-sensors x N-time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3xP-dipoles x N-time samples) and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models [7] (or fast approximations described in [1], [7]) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, the use of a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres results in added complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the
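
      Eq. (1) is straightforward to exercise numerically. The numpy sketch below uses a random stand-in for the lead field G simply to show the shapes involved; a real G would come from the spherical or realistic head model discussed in the abstract.

      ```python
      # The linear forward model F = G Q + N of Eq. (1): M sensors, P dipoles
      # (3 moment components each), N_t time samples.
      import numpy as np

      rng = np.random.default_rng(42)
      M, P, N_t = 64, 2, 100

      G = rng.standard_normal((M, 3 * P))      # lead field (random stand-in)
      Q = rng.standard_normal((3 * P, N_t))    # dipole moment time series
      noise = 0.01 * rng.standard_normal((M, N_t))

      F = G @ Q + noise                        # observed sensor data
      print(F.shape)                           # (64, 100)
      ```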

    11. State-of-the-art review of computational fluid dynamics modeling for fluid-solids systems

      SciTech Connect (OSTI)

      Lyczkowski, R.W.; Bouillard, J.X.; Ding, J.; Chang, S.L.; Burge, S.W.

      1994-05-12

      As the result of 15 years of research (50 staff years of effort), Argonne National Laboratory (ANL), through its involvement in fluidized-bed combustion, magnetohydrodynamics, and a variety of environmental programs, has produced extensive computational fluid dynamics (CFD) software and models to predict the multiphase hydrodynamic and reactive behavior of fluid-solids motions and interactions in complex fluidized-bed reactors (FBRs) and slurry systems. This has resulted in the FLUFIX, IRF, and SLUFIX computer programs. These programs are based on fluid-solids hydrodynamic models and can predict information important to the designer of atmospheric or pressurized bubbling and circulating FBR, fluid catalytic cracking (FCC), and slurry units to guarantee optimum efficiency with minimum release of pollutants into the environment. This latter issue will become of paramount importance with the enactment of the Clean Air Act Amendment (CAAA) of 1995. Solids motion is also the key to understanding erosion processes. Erosion rates in FBRs and pneumatic and slurry components are computed by ANL's EROSION code to predict the potential metal wastage of FBR walls, internals, feed distributors, and cyclones. Only the FLUFIX and IRF codes are reviewed in this paper, together with highlights of the validations, because of length limitations. It is envisioned that one day these codes, with user-friendly pre- and post-processor software and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

    12. Computational Fluid Dynamics (CFD) Modeling for High Rate Pulverized Coal Injection (PCI) into the Blast Furnace

      SciTech Connect (OSTI)

      Dr. Chenn Zhou

      2008-10-15

      Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease the coke and total energy consumption along with minimization of environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by the lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty of obtaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.

    13. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    14. Enabling a Highly-Scalable Global Address Space Model for Petascale Computing

      SciTech Connect (OSTI)

      Apra, Edoardo; Vetter, Jeffrey S; Yu, Weikuan

      2010-01-01

      Over the past decade, the trajectory to the petascale has been built on increased complexity and scale of the underlying parallel architectures. Meanwhile, software developers have struggled to provide tools that maintain the productivity of computational science teams using these new systems. In this regard, Global Address Space (GAS) programming models provide a straightforward and easy to use addressing model, which can lead to improved productivity. However, the scalability of GAS depends directly on the design and implementation of the runtime system on the target petascale distributed-memory architecture. In this paper, we describe the design, implementation, and optimization of the Aggregate Remote Memory Copy Interface (ARMCI) runtime library on the Cray XT5 2.3 PetaFLOPs computer at Oak Ridge National Laboratory. We optimized our implementation with the flow intimation technique that we have introduced in this paper. Our optimized ARMCI implementation improves scalability of both the Global Arrays (GA) programming model and a real-world chemistry application NWChem from small jobs up through 180,000 cores.

    15. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

      SciTech Connect (OSTI)

      Oprisan, Sorinel Adrian; Oprisan, Ana

      2005-03-31

      Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system consists of cytotoxic active (effector) cells -- EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutic, and radiotherapeutic treatments.
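
      Since the study correlates the fractal dimension of the simulated carcinoma with malignant stage, a standard box-counting estimator is the natural tool; a minimal sketch on a synthetic binary pattern follows (pattern and parameters are illustrative, not the authors' code).

      ```python
      # Box-counting fractal dimension of a binary image: slope of
      # log(box count) vs. log(1/box size).
      import numpy as np

      def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:   # count boxes of side s containing any occupied pixel
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append((blocks.max(axis=(1, 3)) > 0).sum())
          return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

      rng = np.random.default_rng(7)
      img = np.zeros((128, 128), dtype=np.uint8)
      walker = np.array([64, 64])
      for _ in range(4000):                    # crude random-walk test pattern
          walker = np.clip(walker + rng.integers(-1, 2, size=2), 0, 127)
          img[walker[0], walker[1]] = 1

      print(f"estimated fractal dimension: {box_count_dimension(img):.2f}")
      ```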

    16. TIS: an Intelligent Gateway Computer for information and modeling networks. Overview

      SciTech Connect (OSTI)

      Hampel, V.E.; Bailey, C.; Kawin, R.A.; Lann, N.A.; McGrogan, S.K.; Scott, W.S.; Stammers, S.M.; Thomas, J.L.

      1983-08-01

      The Technology Information System (TIS) is being used to develop software for Intelligent Gateway Computers (IGC) suitable for the prototyping of advanced, integrated information networks. Dedicated to information management, TIS leads the user to available information resources, on TIS or elsewhere, by means of a master directory and automated access procedures. Other geographically distributed information centers accessible through TIS include federal and commercial systems like DOE/RECON, NASA/RECON, DOD/DROLS, DOT/TIC, CIS, and DIALOG in the United States, the chemical information systems DARC in France, and DECHEMA in West Germany. New centers are added as required.

    17. Swelling in light water reactor internal components: Insights from computational modeling

      SciTech Connect (OSTI)

      Stoller, Roger E.; Barashev, Alexander V.; Golubov, Stanislav I.

      2015-08-01

      A modern cluster dynamics model has been used to investigate the materials and irradiation parameters that control microstructural evolution under the relatively low-temperature exposure conditions that are representative of the operating environment for in-core light water reactor components. The focus is on components fabricated from austenitic stainless steel. The model accounts for the synergistic interaction between radiation-produced vacancies and the helium that is produced by nuclear transmutation reactions. Cavity nucleation rates are shown to be relatively high in this temperature regime (275 to 325°C), but are sensitive to assumptions about the fine-scale microstructure produced under low-temperature irradiation. The cavity nucleation rates observed run counter to the expectation that void swelling would not occur under these conditions. This expectation was based on previous research on void swelling in austenitic steels in fast reactors, and the misleading impression arose primarily from an absence of relevant data. The results of the computational modeling are generally consistent with recent data obtained by examining ex-service components. However, it has been shown that the sensitivity of the model's predictions of low-temperature swelling behavior to assumptions about the primary damage source term and specification of the mean-field sink strengths is somewhat greater than that observed at higher temperatures. Further assessment of the mathematical model is underway to meet the long-term objective of this research, which is to provide a predictive model of void swelling at relevant lifetime exposures to support extended reactor operations.
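
      The mean-field flavor of such cluster dynamics models can be conveyed by the simplest possible vacancy balance, production against absorption at sinks. The numbers below are order-of-magnitude placeholders, not values from the report.

      ```python
      # Mean-field rate-theory sketch: dCv/dt = G - k2 * Dv * Cv, so the
      # steady-state vacancy fraction is G / (k2 * Dv). Illustrative values.
      G  = 1e-6    # vacancy production rate, per site per second
      k2 = 1e15    # total sink strength, m^-2
      Dv = 1e-18   # vacancy diffusivity at ~300 C, m^2/s

      Cv_ss = G / (k2 * Dv)   # absorption balances production
      print(f"steady-state vacancy fraction ~ {Cv_ss:.1e}")
      ```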

    18. Modeling CO{sub 2}-Brine-Rock Interaction Including Mercury and H{sub 2}S Impurities in the Context of CO{sub 2} Geologic Storage

      SciTech Connect (OSTI)

      Spycher, N.; Oldenburg, C.M.

      2014-01-01

      precipitate from the CO{sub 2} as cinnabar in a zone mostly matching the single-phase CO{sub 2} plume. The precipitation of minerals other than cinnabar, however, dominates the evolution of porosity. Main reactions include the replacement of primarily Fe-chlorite by siderite, of calcite by dolomite, and of K-feldspar by muscovite. Chalcedony is also predicted to precipitate from the dissolution of feldspars and quartz. Although the range of predicted porosity change is quite small, the amount of dissolution and precipitation predicted for these individual minerals is not negligible. These reactive transport simulations assume that Hg gas behaves ideally. To examine effects of non-ideality on these simulations, approximate calculations of the fugacity coefficient of Hg in CO{sub 2} were made. Results suggest that Hg condensation could be significantly overestimated when assuming ideal gas behavior, making our simulation results conservative with respect to impacts on injectivity. The effect of pressure on Henry’s constant for Hg is estimated to yield Hg solubilities about 10% lower than when this effect is not considered, a change that is considered too small to affect the conclusions of this report. Although all results in this study are based on relatively mature data and modeling approaches, in the absence of experimental data and more detailed site-specific information, it is not possible to fully validate the results and conclusions.

    19. Physical and Computational Modeling for Chemical and Biological Weapons Airflow Applications

      SciTech Connect (OSTI)

      McEligot, Donald Marinus; Mc Creery, Glenn Ernest; Pink, Robert John; Barringer, C.; Knight, K. J.

      2002-11-01

      There is a need for information on dispersion and infiltration of chemical and biological agents in complex building environments. A recent collaborative study conducted at the Idaho National Engineering and Environmental Laboratory (INEEL) and Bechtel Corporation Research and Development had the objective of assessing computational fluid dynamics (CFD) models for simulation of flow around complicated buildings through a comparison of experimental and numerical results. The test facility used in the experiments was INEEL’s unique large Matched-Index-of-Refraction (MIR) flow system. The CFD code used for modeling was Fluent, a widely available commercial flow simulation package. For the experiment, a building plan was selected to approximately represent an existing facility. It was found that predicted velocity profiles from above the building and in front of the building were in good agreement with the measurements.

    20. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

      SciTech Connect (OSTI)

      Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

      2015-04-15

      Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10{sup −5}). Conclusions: This study
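
      The final prediction step, a linear SVM over texture and intensity attributes, can be sketched as follows. The features are random placeholders standing in for the steered Riesz-wavelet responses, so the printed AUC is meaningless except as a usage illustration.

      ```python
      # Linear SVM over per-patient feature vectors, evaluated by AUC.
      # Random features only -- a stand-in for the paper's texture attributes.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      X = rng.standard_normal((101, 40))         # 101 patients x 40 attributes
      y = (rng.random(101) < 0.17).astype(int)   # ~17/101 recurrences, as in the cohort

      clf = LinearSVC(C=1.0, dual=False)
      print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
      ```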

    1. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

      SciTech Connect (OSTI)

      G. R. Odette; G. E. Lucas

      2005-11-15

      This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture, consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, K{sub Jc}(T - T{sub o}); 3b) An Embrittlement ΔT{sub o} Prediction Model for the Irradiation Hardening Dominated Regime; 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data; 3d) A Model for the K{sub Jc}(T) of a High Strength NFA MA957; 3e) Cracked Body Size and Geometry Effects on Measured and Effective Fracture Toughness - Model Based MC and T{sub o} Evaluations of F82H and Eurofer 97; 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations, which generally can be accessed on the internet or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.
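
      For reference, the Master Curve relation named in item 3a has the standard ASTM E1921 form and is easy to evaluate; the reference temperature below is illustrative.

      ```python
      # ASTM E1921 Master Curve: median toughness of ferritic steels as a
      # universal function of (T - To), in MPa*sqrt(m) with T in deg C.
      import math

      def kjc_median(T, To):
          return 30.0 + 70.0 * math.exp(0.019 * (T - To))

      # Example: a steel with To = -80 C, evaluated 30 C above To.
      print(f"K_Jc(med) = {kjc_median(-50.0, -80.0):.0f} MPa*sqrt(m)")
      ```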

    2. Computational model of collisional-radiative nonequilibrium plasma in an air-driven type laser propulsion

      SciTech Connect (OSTI)

      Ogino, Yousuke; Ohnishi, Naofumi

      2010-05-06

      The thrust power of a gas-driven laser-propulsion system is obtained through interaction with a propellant gas heated by laser energy. Therefore, understanding the nonequilibrium nature of laser-produced plasma is essential for increasing the available thrust force and for improving energy conversion efficiency from the laser to the propellant gas. In this work, a time-dependent collisional-radiative model for air plasma has been developed to study the effects of nonequilibrium atomic and molecular processes on population densities for an air-driven type laser propulsion. Many elementary processes are considered in the number density range of 10{sup 12}/cm{sup 3} <= N <= 10{sup 19}/cm{sup 3} and the temperature range of 300 K <= T <= 40,000 K. We then compute the unsteady behavior of pulse-heated air plasma. When the ionization relaxation time is of the same order as the time scale of the heating pulse, the effects of unsteady ionization are important for estimating air plasma states. From parametric computations, we determine the appropriate conditions for the collisional-radiative steady state, local thermodynamic equilibrium, and corona equilibrium models in that density and temperature range.
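
      A stripped-down example of the time-dependent rate equations behind such a collisional-radiative model, with a single ionization stage and placeholder rate coefficients (the actual model tracks many levels and processes):

      ```python
      # Toy unsteady ionization balance: collisional ionization vs. three-body
      # recombination, integrated with explicit Euler. Coefficients are
      # placeholders chosen only to show the relaxation behavior.
      S, A = 1e-14, 1e-33      # ionization (cm^3/s), recombination (cm^6/s)
      ne = 1e16                # electron density, cm^-3
      n1, nion = 1e16, 0.0     # neutral and ion densities, cm^-3
      dt, steps = 1e-9, 2000

      for _ in range(steps):
          ionization    = S * ne * n1
          recombination = A * ne * ne * nion
          n1   += dt * (recombination - ionization)
          nion += dt * (ionization - recombination)

      print(f"n1 = {n1:.3e}, nion = {nion:.3e} cm^-3 after {steps * dt * 1e9:.0f} ns")
      ```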

    3. Pump apparatus including deconsolidator

      DOE Patents [OSTI]

      Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

      2014-10-07

      A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

    4. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly. Final CRADA Report.

      SciTech Connect (OSTI)

      Tentner, A.; Nuclear Engineering Division

      2009-10-13

      A direct numerical simulation capability for two-phase flows with heat transfer in complex geometries can considerably reduce the hardware development cycle, facilitate optimization, and reduce the costs of testing various industrial facilities, such as nuclear power plants, steam generators, steam condensers, liquid cooling systems, heat exchangers, distillers, and boilers. Specifically, the phenomena occurring in two-phase coolant flow in a BWR (Boiling Water Reactor) fuel assembly include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with the fuel assembly and, ultimately, the reactor performance. Traditionally, the best tools for analyzing two-phase flow phenomena inside the BWR fuel assembly have been sub-channel codes. However, the resolution of these codes is too coarse for analyzing detailed intra-assembly flow patterns, such as the flow around a spacer element. Advanced CFD (Computational Fluid Dynamics) codes offer the potential for detailed 3D simulations of coolant flow inside a fuel assembly, including flow around spacer elements, using more fundamental physical models of flow regimes and phase interactions than sub-channel codes. Such models can extend code applicability to a wider range of situations, which is highly important for increasing efficiency and preventing accidents.

    5. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    6. Optical modulator including graphene

      DOE Patents [OSTI]

      Liu, Ming; Yin, Xiaobo; Zhang, Xiang

      2016-06-07

      The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

    7. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-07-16

      Numerical modeling has become a critical tool to the U.S. Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even most state of the art groundwater models. Of particular concern are the representation of highly-heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e. more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    8. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The TRACC Computational Clusters: With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    9. Introduction to Focus Issue: Rhythms and Dynamic Transitions in Neurological Disease: Modeling, Computation, and Experiment

      SciTech Connect (OSTI)

      Kaper, Tasso J.; Kramer, Mark A.; Rotstein, Horacio G.

      2013-12-15

      Rhythmic neuronal oscillations across a broad range of frequencies, as well as spatiotemporal phenomena such as waves and bumps, have been observed in various areas of the brain and proposed as critical to brain function. While there is a long and distinguished history of studying rhythms in nerve cells and neuronal networks in healthy organisms, associating rhythms with disease and analyzing those associations are more recent developments. Indeed, it is now thought that certain aspects of diseases of the nervous system, such as epilepsy, schizophrenia, Parkinson's, and sleep disorders, are associated with transitions or disruptions of neurological rhythms. This focus issue brings together articles presenting modeling, computational, analytical, and experimental perspectives on rhythms, and on the dynamic transitions between them, that are associated with various diseases.

    10. Extraction of actinides by multi-dentate diamides and their evaluation with computational molecular modeling

      SciTech Connect (OSTI)

      Sasaki, Y.; Kitatsuji, Y.; Hirata, M.; Kimura, T.; Yoshizuka, K.

      2008-07-01

      Multi-dentate diamides have been synthesized and examined for actinide (An) extractions. Bi- and tridentate extractants are the focus of this work. The extraction of actinides was performed from 0.1-6 M HNO{sub 3} to organic solvents. It was evident that N,N,N',N'-tetra-alkyl-diglycolamide (DGA) derivatives, 2,2'-(methylimino)bis(N,N-dioctyl-acetamide) (MIDOA), and N,N'-dimethyl-N,N'-dioctyl-2-(3-oxa-pentadecane)-malonamide (DMDOOPDMA) have relatively high D values (D(Pu) > 70). The following notable results using DGA extractants were obtained: (1) DGAs with short alkyl chains give higher D values than those with long alkyl chains; (2) DGAs with long alkyl chains have high solubility in n-dodecane. Computational molecular modeling was also used to elucidate the effects of structural and electronic properties of the reagents on their different extractabilities. (authors)

    11. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

      SciTech Connect (OSTI)

      Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

      2011-02-01

      For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important both for the acceptance of the mobile agent sensors and for protecting the system from malicious behavior by insiders and by entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager's bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.
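
      The abstract does not specify DualTrust's scoring mathematics. As a generic illustration of how distributed trust models often aggregate interaction evidence, the sketch below uses a beta-reputation update, a common choice in this literature; it is a hypothetical stand-in, not DualTrust's actual formula.

```python
# Hypothetical illustration only: not DualTrust's published formula.
# Beta-reputation update: trust is the expected value of a Beta(good+1, bad+1)
# distribution over an autonomic manager's observed good/bad interactions.
def beta_trust(good: int, bad: int) -> float:
    """Expected trustworthiness after `good` positive and `bad` negative observations."""
    return (good + 1) / (good + bad + 2)

# A manager with 8 cooperative interactions and 1 violation scores ~0.82;
# a newcomer with no history starts at the neutral prior of 0.5.
print(beta_trust(8, 1), beta_trust(0, 0))
```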

    12. A computational model for thermal fluid design analysis of nuclear thermal rockets

      SciTech Connect (OSTI)

      Given, J.A.; Anghaie, S.

      1997-01-01

      A computational model for simulation and design analysis of nuclear thermal propulsion systems has been developed. The model simulates a full-topping expander cycle engine system and the thermofluid dynamics of the core coolant flow, accounting for the real gas properties of the hydrogen propellant/coolant throughout the system. Core thermofluid studies reveal that near-wall heat transfer models currently available may not be applicable to conditions encountered within some nuclear rocket cores. Additionally, the possibility of a core thermal fluid instability at low mass fluxes and the effects of the core power distribution are investigated. Results indicate that for tubular core coolant channels, thermal fluid instability is not an issue within the possible range of operating conditions in these systems. Findings also show the advantages of having a nonflat centrally peaking axial core power profile from a fluid dynamic standpoint. The effects of rocket operating conditions on system performance are also investigated. Results show that high temperature and low pressure operation is limited by core structural considerations, while low temperature and high pressure operation is limited by system performance constraints. The utility of these programs for finding these operational limits, optimum operating conditions, and thermal fluid effects is demonstrated.

    13. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

      2007-08-01

      Numerical modeling has become a critical tool for the Department of Energy in evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous, stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors on the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies, where overly simplistic historical modeling erroneously predicted decade-scale removal times for uranium under ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

    14. GEO2D - Two-Dimensional Computer Model of a Ground Source Heat Pump System

      DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

      James Menart

      2013-06-07

      This file contains a zipped archive of the files required to run GEO2D. GEO2D is a computer code for simulating ground source heat pump (GSHP) systems in two dimensions. GEO2D performs a detailed finite difference simulation of the heat transfer occurring within the working fluid, the tube wall, the grout, and the ground. Both horizontal and vertical wells can be simulated with this program, but it should be noted that the vertical well is modeled as a single tube. The program also models the heat pump in conjunction with the ground-loop heat transfer, simulating the heat pump and ground loop as a system. GEO2D produces many results as a function of time and position, such as heat transfer rates, temperatures, and heat pump performance. In addition, it performs an economic comparison between the simulated geothermal system and a comparable air-source heat pump system, or comparable gas, oil, or propane heating systems with a vapor-compression air conditioner. The version of GEO2D in the attached file has been coupled to the DOE heating and cooling load software ENERGYPLUS, a convenience for the user because heating and cooling loads are an input to GEO2D. GEO2D is a user-friendly program with a graphical user interface for inputs and outputs, which makes entering data simple and produces many plotted results that are easy to understand. Running GEO2D requires access to MATLAB. If MATLAB is not available on your computer, you can download the 64-bit version of MCRInstaller.exe from the MATLAB website or from this geothermal repository; this free download will enable you to run GEO2D.
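
      GEO2D itself is a MATLAB code distributed in the archive above. Purely as an illustration of the explicit finite-difference conduction calculation such a tool performs in the ground around a borehole, here is a small radial 1-D sketch in Python; the diffusivity, borehole radius, and temperatures are assumed values, not GEO2D inputs.

```python
# Illustrative sketch (not GEO2D itself): explicit finite-difference conduction
# in the ground around a borehole wall, radial 1-D; all values are assumptions.
import numpy as np

nr, dr, dt = 60, 0.05, 60.0          # radial nodes, spacing [m], time step [s]
alpha = 1.0e-6                       # soil thermal diffusivity [m^2/s]
r = 0.1 + dr * np.arange(nr)         # radii from the borehole wall outward [m]
T = np.full(nr, 12.0)                # undisturbed ground temperature [C]
T_wall = 30.0                        # fluid/grout temperature imposed at the wall [C]

# Stability requirement for the explicit scheme
assert alpha * dt / dr**2 < 0.5

for _ in range(24 * 60):             # one day of heat rejection, 60 s steps
    T[0] = T_wall
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2 \
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])   # radial Laplacian
    T[1:-1] += alpha * dt * lap      # far-field node stays at 12 C
print(T[:8])                         # temperature profile near the borehole
```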

    15. Computational Nanophotonics: modeling optical interactions and transport in tailored nanosystem architectures

      SciTech Connect (OSTI)

      Schatz, George; Ratner, Mark

      2014-02-27

      This report describes research by George Schatz and Mark Ratner that was done over the period 10/03-5/09 at Northwestern University. This research was part of a larger project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray, and there were many joint publications, as summarized later. In addition, much of this work involved collaborations with experimental groups at Northwestern, Argonne, and elsewhere. The research was primarily concerned with developing theory and computational methods that can be used to describe the interaction of light with noble metal nanoparticles (especially silver) that are capable of plasmon excitation. Classical electrodynamics provides a powerful approach for performing these studies, so much of this research project involved the development of methods for solving Maxwell's equations, including both linear and nonlinear effects, and examining a wide range of nanostructures, including particles, particle arrays, metal films, films with holes, and combinations of metal nanostructures with polymers and other dielectrics. In addition, our work broke new ground in the development of quantum mechanical methods to describe plasmonic effects based on the use of time-dependent density functional theory, and we developed new theory concerned with the coupling of plasmons to electrical transport in molecular wire structures. Applications of our technology were aimed at the development of plasmonic devices as components of optoelectronic circuits, plasmons for spectroscopy applications, and plasmons for energy-related applications.
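
      As a concrete, minimal instance of the classical electrodynamics solvers referred to here, the following sketch advances Maxwell's equations in one dimension with the standard Yee finite-difference time-domain (FDTD) update. The grid size, Courant number, and Gaussian source are arbitrary illustrative choices, far simpler than the 3-D plasmonic structures studied in the project.

```python
# A minimal 1-D FDTD (Yee) update for Maxwell's equations in vacuum, in
# normalized units; grid, source, and step count are illustrative assumptions.
import numpy as np

n, steps = 400, 600
Ez = np.zeros(n)                      # electric field
Hy = np.zeros(n)                      # magnetic field (staggered half cell)
c = 0.5                               # Courant number (c*dt/dx)

for t in range(steps):
    Hy[:-1] += c * (Ez[1:] - Ez[:-1])            # H update, half step offset
    Ez[1:]  += c * (Hy[1:] - Hy[:-1])            # E update
    Ez[n // 4] += np.exp(-((t - 60) / 15.0)**2)  # soft Gaussian source
print(float(Ez.max()))
```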

    16. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

      SciTech Connect (OSTI)

      Stockman, Mark; Gray, Steven

      2014-02-21

      The program is directed toward the development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on developing computational methods and on predicting, through computational theory, new phenomena of optical energy transfer and transformation at the extreme nanoscale (down to a few nanometers).

    17. Computational Tools for Predictive Modeling of Properties in Complex Actinide Systems

      SciTech Connect (OSTI)

      Autschbach, Jochen; Govind, Niranjan; Atta Fynn, Raymond; Bylaska, Eric J.; Weare, John H.; de Jong, Wibe A.

      2015-03-30

      In this chapter we focus on methodological and computational aspects that are key to accurately modeling the spectroscopic and thermodynamic properties of molecular systems containing actinides within the density functional theory (DFT) framework. Our focus is on properties that require either an accurate relativistic all-electron description or an accurate description of the dynamical behavior of actinide species in an environment at finite temperature, or both. The implementation of the methods and the calculations discussed in this chapter were done with the NWChem software suite (Valiev et al. 2010). In the first two sections we discuss two methods that account for relativistic effects, the ZORA and the X2C Hamiltonian. Section 1.2.1 discusses the implementation of the approximate relativistic ZORA Hamiltonian and its extension to magnetic properties. Section 1.3 focuses on the exact X2C Hamiltonian and the application of this methodology to obtain accurate molecular properties. In Section 1.4 we examine the role of a dynamical environment at finite temperature as well as the presence of other ions on the thermodynamics of hydrolysis and exchange reaction mechanisms. Finally, Section 1.5 discusses the modeling of XAS (EXAFS, XANES) properties in realistic environments accounting for both the dynamics of the system and (for XANES) the relativistic effects.

    18. Wind Turbine Modeling for Computational Fluid Dynamics: December 2010 - December 2012

      SciTech Connect (OSTI)

      Tossas, L. A. M.; Leonardi, S.

      2013-07-01

      With the shortage of fossil fuels and increasing environmental awareness, wind energy is becoming more and more important. As the market for wind energy grows, wind turbines and wind farms are becoming larger. Current utility-scale turbines extend a significant distance into the atmospheric boundary layer. Therefore, the interaction between the atmospheric boundary layer and the turbines and their wakes needs to be better understood. The turbulent wakes of upstream turbines affect the flow field of the turbines behind them, decreasing power production and increasing mechanical loading. With a better understanding of this type of flow, wind farm developers could plan better-performing, less maintenance-intensive wind farms. Simulating this flow using computational fluid dynamics is one important way to gain a better understanding of wind farm flows. In this study, we compare the performance of actuator disc and actuator line models in producing wind turbine wakes and the wake-turbine interaction between multiple turbines. We also examine parameters that affect the performance of these models, such as grid resolution, the use of a tip-loss correction, and the way in which the turbine force is projected onto the flow field.
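
      For readers unfamiliar with the actuator disc model compared in the study: it replaces the resolved rotor with a momentum sink, and classical one-dimensional momentum theory then links the axial induction factor to thrust and power. A minimal sketch of that relationship (illustrative only, not the report's solver):

```python
# One-dimensional momentum theory behind the actuator disc model: the rotor is
# represented by a thrust force rather than resolved blades.
import numpy as np

a = np.linspace(0.0, 0.5, 51)      # axial induction factor
CT = 4.0 * a * (1.0 - a)           # thrust coefficient
CP = 4.0 * a * (1.0 - a)**2        # power coefficient

i = CP.argmax()
print(f"Betz optimum: a = {a[i]:.2f}, CP = {CP[i]:.4f}")   # ~1/3 and ~0.593
```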

    19. Development of Computational Tools for Metabolic Model Curation, Flux Elucidation and Strain Design

      SciTech Connect (OSTI)

      Maranas, Costas D

      2012-05-21

      An overarching goal of the Department of Energy's mission is the efficient deployment and engineering of microbial and plant systems to enable biomass conversion in pursuit of high-energy-density liquid biofuels. This has spurred the pace at which new organisms are sequenced and annotated. This torrent of genomic information has opened the door to understanding metabolism not just in skeletal pathways and a handful of microorganisms but in truly genome-scale reconstructions derived for hundreds of microbes and plants. Understanding and redirecting metabolism is crucial because metabolic fluxes are unique descriptors of cellular physiology that directly assess the current cellular state and quantify the effect of genetic engineering interventions. At the same time, however, trying to keep pace with the rate of genomic data generation has ushered in a number of modeling and computational challenges related to (i) the automated assembly, testing, and correction of genome-scale metabolic models, (ii) metabolic flux elucidation using labeled isotopes, and (iii) comprehensive identification of engineering interventions leading to the desired metabolism redirection.
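
      As a minimal illustration of the genome-scale flux calculations this project addresses, the following toy flux balance analysis maximizes a "biomass" flux subject to steady-state mass balance S v = 0, posed as a linear program. The three-reaction network and its bounds are hypothetical, orders of magnitude smaller than a real genome-scale reconstruction.

```python
# Toy flux balance analysis: maximize biomass export subject to S v = 0.
# The stoichiometric matrix and flux bounds are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: R1 uptake -> A; R2: A -> B; R3: B -> biomass
S = np.array([[ 1, -1,  0],    # mass balance on A
              [ 0,  1, -1]])   # mass balance on B
bounds = [(0, 10), (0, 8), (0, None)]      # uptake capped at 10, R2 at 8
c = np.array([0.0, 0.0, -1.0])             # maximize v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)   # optimal flux distribution; here limited by the R2 bound of 8
```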

    20. Computer modelling of the reduction of rare earth dopants in barium aluminate

      SciTech Connect (OSTI)

      Rezende, Marcos V. dos S; Valerio, Mario E.G.; Jackson, Robert A.

      2011-08-15

      Long lasting phosphorescence in barium aluminates can be achieved by doping with rare earth ions in divalent charge states. The rare earth ions are initially in a trivalent charge state, but are reduced to a divalent charge state before being doped into the material. In this paper, the reduction of trivalent rare earth ions in the BaAl{sub 2}O{sub 4} lattice is studied by computer simulation, with the energetics of the whole reduction and doping process being modelled by two methods, one based on single ion doping and one which allows dopant concentrations to be taken into account. A range of different reduction schemes are considered and the most energetically favourable schemes identified. Graphical abstract: the doping and subsequent reduction of a rare earth ion into the barium aluminate lattice. Highlights: (1) the doping of barium aluminate with rare earth ions reduced in a range of atmospheres has been modelled; (2) the overall solution energy for the doping process for each ion in each reducing atmosphere is calculated using two methods; (3) the lowest-energy reduction process is predicted and compared with experimental results.

    1. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multidisciplinary science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host to the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing, and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

    2. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

      SciTech Connect (OSTI)

      Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

      2006-10-01

      Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
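
      The paper's full strategy is not reproduced here, but the following sketch conveys the basic sampling-based propagation idea: each input focal element (an interval carrying a mass) is sampled through the model, and belief and plausibility of an output event are accumulated from the masses of focal elements that map entirely, or partly, into the event. The model function, intervals, and masses are invented for illustration.

```python
# Sketch of sampling-based propagation of an evidence-theory input structure.
# The focal elements (interval, mass), model, and event are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
focal = [((0.0, 0.4), 0.3), ((0.2, 0.8), 0.5), ((0.6, 1.0), 0.2)]
model = lambda x: 3.0 * x * (1.0 - x)        # response of interest
threshold = 0.3                              # output event: y > 0.3

bel = pl = 0.0
for (lo, hi), m in focal:
    y = model(rng.uniform(lo, hi, 2000))     # sample within the focal element
    if y.min() > threshold:                  # whole image lies inside the event
        bel += m
    if y.max() > threshold:                  # image intersects the event
        pl += m
print(f"Bel = {bel:.2f} <= P(y > {threshold}) <= Pl = {pl:.2f}")
```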

    3. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

      SciTech Connect (OSTI)

      Racle, Julien; Hatzimanikatis, Vassily; Stefaniuk, Adam Jan

      2015-07-28

      Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
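
      The authors' algorithm tracks individual ribosomes on mRNAs; a far simpler cousin that still illustrates the discrete stochastic treatment they argue for is a Gillespie simulation of birth-death protein synthesis, sketched below with assumed rate constants.

```python
# Minimal Gillespie simulation of birth-death protein synthesis, illustrating
# discrete stochastic kinetics; the rate constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
k_syn, k_deg = 5.0, 0.05        # synthesis [1/s] and degradation [1/(s*molecule)]
p, t, t_end = 0, 0.0, 2000.0
trace = []

while t < t_end:
    rates = np.array([k_syn, k_deg * p])     # propensities: make one, degrade one
    total = rates.sum()
    t += rng.exponential(1.0 / total)        # exponential waiting time
    p += 1 if rng.random() < rates[0] / total else -1
    trace.append(p)

trace = np.array(trace[len(trace) // 2:])    # discard the initial transient
print(f"mean = {trace.mean():.1f}, Fano factor = {trace.var()/trace.mean():.2f}")
```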

    4. Improving computer simulations of heat transfer for projecting fenestration products: Using radiation view-factor models

      SciTech Connect (OSTI)

      Griffith, B.; Tuerler, D.; Arasteh, D.K.; Curcija, D.

      1998-10-01

      The window well formed by the concave surface on the warm side of skylights and garden windows can cause surface heat-flow rates to be different for these projecting types of fenestration products than for normal planar windows. Current methods of simulating fenestration thermal conductance (U-factor) use constant boundary condition values for overall surface heat transfer. Simulations that account for local variations in surface heat transfer rates (radiation and convection) may be more accurate for rating and labeling window products whose surfaces project outside a building envelope. This paper, which presents simulation and experimental results for one projecting geometry, is the first step in documenting the importance of these local effects. A generic specimen, called the foam garden window, was used in simulations and experiments to investigate heat transfer of projecting surfaces. Experiments focused on a vertical cross section (measurement plane) located at the middle of the window well on the warm side of the specimen. The specimen was placed between laboratory thermal chambers that were operated at American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) winter heating design conditions. Infrared thermography was used to map surface temperatures. Air temperature and velocity were mapped throughout the measurement plane using a mechanical traversing system. Finite-element computer simulations that directly modeled element-to-element radiation were better able to match experimental data than simulations that used fixed coefficients for total surface heat transfer. Air conditions observed in the window well suggest that localized convective effects were the reason for the difference between actual and modeled surface temperatures. U-value simulation results were 5% to 10% lower when radiation was modeled directly.
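
      For context, the net-radiation (view-factor) method used in these simulations reduces, for two gray-diffuse surfaces, to a series of surface and space resistances. The areas, emissivities, and winter-like temperatures below are illustrative assumptions, not the paper's foam garden window data.

```python
# Net radiative exchange between two gray-diffuse surfaces via the resistance
# network of the net-radiation method; all values are illustrative assumptions.
sigma = 5.670e-8                 # Stefan-Boltzmann constant [W/(m^2 K^4)]
A1, A2, F12 = 0.5, 1.5, 1.0      # areas [m^2]; surface 1 sees only surface 2
eps1, eps2 = 0.9, 0.85           # emissivities
T1, T2 = 294.0, 255.0            # warm-side and cold-side temperatures [K]

# Surface resistance 1 + space resistance + surface resistance 2
R = (1 - eps1) / (eps1 * A1) + 1.0 / (A1 * F12) + (1 - eps2) / (eps2 * A2)
q12 = sigma * (T1**4 - T2**4) / R
print(f"net radiative exchange = {q12:.1f} W")
```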

    5. DEVELOPMENT OF A COMPUTATIONAL MULTIPHASE FLOW MODEL FOR FISCHER TROPSCH SYNTHESIS IN A SLURRY BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Donna Post Guillen; Tami Grimmett; Anastasia M. Gribik; Steven P. Antal

      2010-09-01

      The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems, with the principal objective of safeguarding U.S. energy security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR), in which the gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer-Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer-chain hydrocarbon products, which can be upgraded to gasoline, diesel, or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics, coupled with reacting flow chemistry and heat transfer, that affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamic (CMFD) model to aid in understanding the physico-chemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles, and solid catalyst particles) that includes twelve species: (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product in small bubbles, large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined based on the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1].

    6. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

      SciTech Connect (OSTI)

      Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing (E-mail: jing.xiong@siat.ac.cn); Zhang, Jianwei

      2015-01-15

      Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

    7. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

    8. Mira Early Science Program | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... HPC architectures. Together, the 16 projects span a diverse range of scientific fields, numerical methods, programming models, and computational approaches. The latter include...

    9. Computational Modeling of Fluid Flow through a Fracture in Permeable Rock

      SciTech Connect (OSTI)

      Crandall, Dustin; Ahmadi, Goodarz; Smith, Duane H

      2010-01-01

      Laminar, single-phase, finite-volume solutions to the Navier–Stokes equations of fluid flow through a fracture within permeable media have been obtained. The fracture geometry was acquired from computed tomography scans of a fracture in Berea sandstone, capturing the small-scale roughness of these natural fluid conduits. First, the roughness of the two-dimensional fracture profiles was analyzed and shown to be similar to Brownian fractal structures. The permeability and tortuosity of each fracture profile were determined from simulations of fluid flow through these geometries with impermeable fracture walls. A surrounding permeable medium, assumed to obey Darcy's law with permeabilities from 0.2 to 2,000 millidarcies, was then included in the analysis. A series of simulations for flows in fractured permeable rocks was performed, and the results were used to develop a relationship between the flow rate and pressure loss for fractures in porous rocks. The resulting friction factor, which accounts for the fracture geometric properties, is similar to the cubic law; it has the potential to be of use in discrete-fracture reservoir-scale simulations of fluid flow through highly fractured geologic formations with appreciable matrix permeability. The flow from the surrounding permeable medium into the fracture was significant when the flow resistances within the fracture and the medium were of the same order; in this regime, the volumetric flow rate within the fracture profile increased by more than 5% for high-permeability fractured porous media.
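
      For reference, the cubic law against which the derived friction factor is compared states that laminar flow between smooth parallel plates scales with the cube of the aperture. The aperture, width, and pressure gradient below are assumed values, not the paper's Berea sandstone data.

```python
# The cubic law for flow in a smooth parallel-plate fracture; values assumed.
mu = 1.0e-3          # water viscosity [Pa s]
b = 200e-6           # fracture aperture [m]
w = 0.05             # fracture width [m]
dpdx = 1.0e3         # pressure gradient along the fracture [Pa/m]

Q = w * b**3 / (12.0 * mu) * dpdx    # volumetric flow rate [m^3/s]
print(f"Q = {Q:.3e} m^3/s")          # doubling b multiplies Q by 8
```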

    10. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

      SciTech Connect (OSTI)

      Carbajo, Juan J; Qualls, A L

      2008-01-01

      The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW (thermal) and 40 kW (net electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium, which is liquid at ambient temperature). This space reactor is intended to be deployed on the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed with early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical HXs. The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant, at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the

    11. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9–10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community, and to collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop was comprised of plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways from the workshop and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts, supply discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest what the most pressing MHK technology needs are and how the U.S. Department of Energy (DOE) and national laboratory resources can be utilized to assist the marine energy industry in the most effective manner.

    12. Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop

      SciTech Connect (OSTI)

      Musial, W.; Lawson, M.; Rooney, S.

      2013-02-01

      The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9-10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community and collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop was comprised of plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts and discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest how the U.S. Department of Energy and national laboratory resources can be utilized to most effectively assist the marine energy industry.

    13. Natural Abundance 17O Nuclear Magnetic Resonance and Computational Modeling Studies of Lithium Based Liquid Electrolytes

      SciTech Connect (OSTI)

      Deng, Xuchu; Hu, Mary Y.; Wei, Xiaoliang; Wang, Wei; Chen, Zhong; Liu, Jun; Hu, Jian Z.

      2015-07-01

      Natural abundance 17O NMR measurements were conducted on electrolyte solutions consisting of Li[CF3SO2NSO2CF3] (LiTFSI) dissolved in the solvents ethylene carbonate (EC), propylene carbonate (PC), and ethyl methyl carbonate (EMC), and their mixtures, at various concentrations. It was observed that the 17O chemical shifts of the solvent molecules change with the concentration of LiTFSI. The chemical shift displacements of the carbonyl oxygen are evidently greater than those of the ethereal oxygen, strongly indicating that the Li+ ion is coordinated with the carbonyl oxygen rather than the ethereal oxygen. To understand the detailed molecular interactions, computational modeling of 17O chemical shifts was carried out on proposed solvation structures. By comparing the predicted chemical shifts with the experimental values, it is found that a Li+ ion is coordinated with four double-bond oxygen atoms from EC, PC, EMC, and the TFSI- anion. When the solvents EC, PC, and EMC are present in excess, the Li+-coordinated solvent molecules undergo rapid exchange with bulk solvent molecules, resulting in averaged 17O chemical shifts. Several kinds of solvation structures are identified, where the proportion of each structure in the liquid electrolytes investigated depends on the concentration of LiTFSI.
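
      The concentration dependence described here follows from fast chemical exchange: the observed 17O shift is the population-weighted average of the Li+-bound and bulk environments. A minimal sketch of that averaging, with invented shift values rather than the paper's measured ones:

```python
# Fast-exchange averaging of a 17O chemical shift; all shift values are assumptions.
def observed_shift(x_bound: float, delta_bound: float, delta_free: float) -> float:
    """Population-weighted average shift [ppm] when exchange is fast on the NMR timescale."""
    return x_bound * delta_bound + (1.0 - x_bound) * delta_free

# e.g. 20% of carbonyl oxygens coordinated to Li+, with an assumed +8 ppm bound shift
print(observed_shift(0.20, delta_bound=208.0, delta_free=200.0))  # -> 201.6 ppm
```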

    14. Transfer matrix computation of critical polynomials for two-dimensional Potts models

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jacobsen, Jesper Lykke; Scullard, Christian R.

      2013-02-04

      In our previous work, we showed that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P{sub B}(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e{sup K} − 1 of P{sub B}(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P{sub B}(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P{sub B}(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8{sup 2}), kagome, and (3, 12{sup 2}) lattices for bases of up to, respectively, 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v{sub c} obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v{sub c}(4, 8{sup 2}) = 3.742 489 (4), v{sub c}(kagome) = 1.876 459 7 (2), and v{sub c}(3, 12{sup 2}) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.
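
      To make the temperature variable concrete: for the square lattice the critical manifold is known exactly, v{sup 2} = q, so the ferromagnetic critical coupling follows in closed form. The snippet below is only a check of the v = e{sup K} − 1 convention, not the transfer matrix computation of the paper.

```python
# Exact square-lattice q-state Potts critical point in the v = e^K - 1 variable:
# the critical manifold is v^2 = q, so v_c = sqrt(q) and K_c = ln(1 + sqrt(q)).
import math

for q in (2, 3, 4):
    v_c = math.sqrt(q)               # exact square-lattice critical point
    K_c = math.log(1.0 + v_c)        # corresponding critical coupling
    print(f"q = {q}: v_c = {v_c:.6f}, K_c = {K_c:.6f}")
```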

    15. Computer modeling of electrical and thermal performance during bipolar pulsed radiofrequency for pain relief

      SciTech Connect (OSTI)

      Pérez, Juan J.; Pérez-Cajaraville, Juan J.; Muñoz, Víctor; Berjano, Enrique

      2014-07-15

      Purpose: Pulsed RF (PRF) is a nonablative technique for treating neuropathic pain. Bipolar PRF application is currently aimed at creating a “strip lesion” to connect the electrode tips; however, the electrical and thermal performance during bipolar PRF is currently unknown. The objective of this paper was to study the temperature and electric field distributions during bipolar PRF. Methods: The authors developed computer models to study temperature and electric field distributions during bipolar PRF and to assess the possible ablative thermal effect caused by the accumulated temperature spikes, along with any possible electroporation effects caused by the electrical field. The authors also modeled the bipolar ablative mode, known as bipolar Continuous Radiofrequency (CRF), in order to compare both techniques. Results: There were important differences between CRF and PRF in terms of electrical and thermal performance. In bipolar CRF: (1) the initial temperature of the tissue impacts on temperature progress and hence on the thermal lesion dimension; and (2) at 37 °C, 6 min of bipolar CRF creates a strip thermal lesion between the electrodes when these are separated by a distance of up to 20 mm. In bipolar PRF: (1) an interelectrode distance shorter than 5 mm produces thermal damage (i.e., ablative effect) in the intervening tissue after 6 min of bipolar PRF; and (2) the possible electroporation effect (electric fields higher than 150 kV m{sup −1}) would be exclusively circumscribed to a very small zone of tissue around the electrode tip. Conclusions: The results suggest that (1) the clinical parameters considered to be suitable for bipolar CRF should not necessarily be considered valid for bipolar PRF, and vice versa; and (2) the ablative effect of the CRF mode is mainly due to its much greater level of delivered energy than is the case in PRF, and therefore at the same applied energy levels, CRF and PRF are expected to result in the same outcomes in terms of

    16. Automotive Underhood Thermal Management Analysis Using 3-D Coupled Thermal-Hydrodynamic Computer Models: Thermal Radiation Modeling

      SciTech Connect (OSTI)

      Pannala, S; D'Azevedo, E; Zacharia, T

      2002-02-26

      The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiative thermal calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages. The results are discussed in section E. This report ends with conclusions and future scope of

    17. Kinetic analysis of the phenyl-shift reaction in β-O-4 lignin model compounds: A computational study.

      SciTech Connect (OSTI)

      Beste, Ariana; Buchanan III, A C

      2011-01-01

      The phenyl-shift reaction in β-phenethyl phenyl ether (β-PhCH{sub 2}CH{sub 2}OPh, β-PPE) is an integral step in the pyrolysis of PPE, which is a model compound for the β-O-4 linkage in lignin. We investigated the influence of naturally occurring substituents (hydroxy, methoxy) on the reaction rate by calculating relative rate constants using density functional theory in combination with transition state theory, including anharmonic corrections for low-frequency modes. The phenyl-shift reaction proceeds through an intermediate, and the overall rate constants were computed invoking the steady-state approximation (its validity was confirmed). Substituents on the phenethyl group have only little influence on the rate constants. If a methoxy substituent is located in the para position of the phenyl ring adjacent to the ether oxygen, the energies of the intermediate and second transition state are lowered, but the overall rate constant is not significantly altered. This is a consequence of the first transition, from pre-complex to intermediate, dominating the overall rate constant. o- and di-o-methoxy substituents accelerate the phenyl-migration rate compared to β-PPE.
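
      The link between the computed barriers and the reported relative rates is conventional transition state theory, k = (k{sub B}T/h) exp(−ΔG‡/RT), so a substituent-induced barrier change maps directly to a rate ratio. The barrier heights and temperature below are illustrative assumptions, not the paper's values.

```python
# Transition-state-theory rate ratio from a substituent-induced barrier change:
# k = (kB*T/h) * exp(-dG/(R*T)), so the prefactor cancels in the ratio.
# The temperature and both barriers are illustrative assumptions.
import math

R = 8.314462            # gas constant [J/(mol K)]
T = 618.0               # assumed pyrolysis temperature [K]
dG_parent = 185.0e3     # assumed barrier for beta-PPE [J/mol]
dG_subst  = 180.0e3     # assumed barrier with an o-methoxy substituent [J/mol]

ratio = math.exp(-(dG_subst - dG_parent) / (R * T))
print(f"k(substituted)/k(parent) = {ratio:.2f}")   # ~2.6-fold acceleration
```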

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in current initiatives addressing

    19. Modeling Pancreatic Tumor Motion Using 4-Dimensional Computed Tomography and Surrogate Markers

      SciTech Connect (OSTI)

      Huguet, Florence; Yorke, Ellen D.; Davidson, Margaret; Zhang, Zhigang; Jackson, Andrew; Mageras, Gig S.; Wu, Abraham J.; Goodman, Karyn A.

      2015-03-01

      Purpose: To assess intrafractional positional variations of pancreatic tumors using 4-dimensional computed tomography (4D-CT), their impact on gross tumor volume (GTV) coverage, and the reliability of a biliary stent, fiducial seeds, and the real-time position management (RPM) external marker as tumor surrogates for setup of respiratory-gated treatment, and to build a correlative model of tumor motion. Methods and Materials: We analyzed the respiration-correlated 4D-CT images acquired during simulation of 36 patients with either a biliary stent (n=16) or implanted fiducials (n=20) who were treated with RPM respiratory-gated intensity modulated radiation therapy for locally advanced pancreatic cancer. Respiratory displacement relative to end-exhalation was measured for the GTV, the biliary stent or fiducial seeds, and the RPM marker. The results were compared between the full respiratory cycle and the gating interval. A linear mixed model was used to assess the correlation of GTV motion with the potential surrogate markers. Results: The average ± SD GTV excursions were 0.3 ± 0.2 cm in the left-right direction, 0.6 ± 0.3 cm in the anterior-posterior direction, and 1.3 ± 0.7 cm in the superior-inferior direction. Gating around end-exhalation reduced GTV motion by 46% to 60%. D95% was at least the prescribed 56 Gy in 76% of patients. GTV displacement was associated with the RPM marker, the biliary stent, and the fiducial seeds; the correlation was better with the fiducial seeds and the biliary stent. Conclusions: Respiratory gating reduced the margin necessary for radiation therapy for pancreatic tumors. GTV motion was well correlated with biliary stent or fiducial seed displacements, validating their use as surrogates for daily assessment of GTV position during treatment. A patient-specific internal target volume based on 4D-CT is recommended for both gated and non-gated treatment; otherwise, our model can be used to predict the degree of GTV motion.

    20. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

      SciTech Connect (OSTI)

      Jablonowski, Christiane

      2015-07-14

      The research investigates and advances strategies for bridging the scale discrepancies between local, regional, and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing, and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The foci of the investigations have been the characteristics of both static mesh adaptations and dynamically adaptive grids that can capture flow fields of interest, such as tropical cyclones. Six research themes were chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project

    1. Computational modeling predicts simultaneous targeting of fibroblasts and epithelial cells is necessary for treatment of pulmonary fibrosis

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Warsinske, Hayley C.; Wheaton, Amanda K.; Kim, Kevin K.; Linderman, Jennifer J.; Moore, Bethany B.; Kirschner, Denise E.

      2016-06-23

      Pulmonary fibrosis is pathologic remodeling of lung tissue that can result in difficulty breathing, reduced quality of life, and a poor prognosis for patients. Fibrosis occurs as a result of insult to lung tissue, though the mechanisms of this response are not well characterized. The disease is driven in part by dysregulation of fibroblast proliferation and differentiation into myofibroblast cells, as well as pro-fibrotic mediator-driven epithelial cell apoptosis. The most well-characterized pro-fibrotic mediator associated with pulmonary fibrosis is TGF-β1. Excessive synthesis of, and sensitivity to, pro-fibrotic mediators, as well as insufficient production of and sensitivity to anti-fibrotic mediators, has been credited with enabling fibroblast accumulation. Available treatments neither halt nor reverse lung damage. In this study we have two aims: to identify molecular- and cellular-scale mechanisms driving fibroblast proliferation and differentiation as well as epithelial cell survival in the context of fibrosis, and to predict therapeutic targets and strategies. We combine in vitro studies with a multi-scale hybrid agent-based computational model that describes fibroblasts and epithelial cells in co-culture. Within this model TGF-β1 represents a pro-fibrotic mediator, and we include detailed dynamics of TGF-β1 receptor-ligand signaling in fibroblasts; PGE2 represents an anti-fibrotic mediator. Using uncertainty and sensitivity analysis we identify TGF-β1 synthesis, TGF-β1 activation, and PGE2 synthesis among the key mechanisms contributing to fibrotic outcomes. We further demonstrate that intervention strategies combining potential therapeutics targeting both fibroblast regulation and epithelial cell survival can promote healthy tissue repair better than individual strategies. Combinations of existing drugs and compounds may provide significant improvements to the current standard of care for pulmonary fibrosis. In conclusion, a two-hit therapeutic

    2. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

      SciTech Connect (OSTI)

      Johanna H Oxstrand; Katya L Le Blanc

      2012-07-01

      The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

    3. Computing and Computational Sciences Directorate - Computer Science and Mathematics Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in the computational sciences and the application of advanced computing systems and computational, mathematical, and analysis techniques to the solution of scientific problems of national importance. We seek to work

    4. LIAR -- A computer program for the modeling and simulation of high performance linacs

      SciTech Connect (OSTI)

      Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

      1997-04-01

      The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion, and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features, and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

    5. MODELING STRATEGIES TO COMPUTE NATURAL CIRCULATION USING CFD IN A VHTR AFTER A LOFA

      SciTech Connect (OSTI)

      Yu-Hsin Tung; Richard W. Johnson; Ching-Chang Chieng; Yuh-Ming Ferng

      2012-11-01

      A prismatic gas-cooled very high temperature reactor (VHTR) is being developed under the Next Generation Nuclear Plant (NGNP) program of the U.S. Department of Energy, Office of Nuclear Energy. In the design of the prismatic VHTR, hexagonal graphite blocks are drilled to allow insertion of fuel pins, made of compacted TRISO fuel particles, and coolant channels for the helium coolant. One of the concerns for the reactor design is the effect of a loss of flow accident (LOFA), in which the coolant circulators are lost for some reason, causing a loss of forced coolant flow through the core. In such an event, it is desired to know what happens to the (reduced) heat still being generated in the core and whether it represents a problem for the fuel compacts, the graphite core, or the reactor vessel (RV) walls. One of the mechanisms for the transport of heat out of the core is the natural circulation of the coolant, which is still present. That is, how much heat can be transported by natural circulation through the core and upward to the top of the upper plenum? It is beyond current capability for a computational fluid dynamics (CFD) analysis to perform a calculation on the whole RV with a sufficiently refined mesh to examine the full potential of natural circulation in the vessel. The present paper reports the investigation of several strategies to model the flow and heat transfer in the RV. It is found that it is necessary to employ representative geometries of the core to estimate the heat transfer. However, by taking advantage of global and local symmetries, a detailed estimate of the strength of the resulting natural circulation and the level of heat transfer to the top of the upper plenum is obtained.

    6. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

      SciTech Connect (OSTI)

      Lingerfelt, Eric J; Endeve, Eirik; Ovchinnikov, Oleg S; Borreguero Calvo, Jose M; Park, Byung H; Archibald, Richard K; Symons, Christopher T; Kalinin, Sergei V; Messer, Bronson; Shankar, Mallikarjun; Jesse, Stephen

      2016-01-01

      Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes; with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the materials science community, elaborate on how BEAM's design and infrastructure tackle those needs, and present a small subset of use cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

    7. KINETIC MODELING OF A FISCHER-TROPSCH REACTION OVER A COBALT CATALYST IN A SLURRY BUBBLE COLUMN REACTOR FOR INCORPORATION INTO A COMPUTATIONAL MULTIPHASE FLUID DYNAMICS MODEL

      SciTech Connect (OSTI)

      Anastasia Gribik; Doona Guillen, PhD; Daniel Ginosar, PhD

      2008-09-01

      Currently, multi-tubular fixed bed reactors, fluidized bed reactors, and slurry bubble column reactors (SBCRs) are used in commercial Fischer-Tropsch (FT) synthesis. There are a number of advantages of the SBCR compared to fixed and fluidized bed reactors. The main advantage of the SBCR is that temperature control and heat recovery are more easily achieved. The SBCR is a multiphase chemical reactor in which a synthesis gas, comprised mainly of H2 and CO, is bubbled through a liquid hydrocarbon wax containing solid catalyst particles to produce specialty chemicals, lubricants, or fuels. The FT synthesis reaction is the polymerization of methylene groups [-(CH2)-], forming mainly linear alkanes and alkenes ranging from methane to high molecular weight waxes. The Idaho National Laboratory is developing a computational multiphase fluid dynamics (CMFD) model of the FT process in a SBCR. This paper discusses the incorporation of absorption and reaction kinetics into the current hydrodynamic model. A phased approach for incorporation of the reaction kinetics into a CMFD model is presented here. Initially, a simple kinetic model is coupled to the hydrodynamic model, with increasing levels of complexity added in stages. The first phase of the model includes incorporation of the absorption of gas species from both large and small bubbles into the bulk liquid phase. The driving force for the gas across the gas-liquid interface into the bulk liquid depends upon the interfacial gas concentration in both small and large bubbles. However, because it is difficult to measure the concentration at the gas-liquid interface, coefficients for convective mass transfer have been developed for the overall driving force between the bulk concentrations in the gas and liquid phases. It is assumed that there are no temperature effects from mass transfer of the gas phases to the bulk liquid phase, since there are only small amounts of dissolved gas in the liquid phase. The product from the
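
      For reference, the overall-driving-force closure described above is commonly written in the standard two-film form sketched below. This is a generic textbook expression with illustrative notation, not necessarily the exact closure adopted in the INL model.

      ```latex
      % Two-film gas-to-liquid transfer rate for species i (illustrative form):
      % N_i  : moles transferred per unit volume and time
      % k_{L,i} : liquid-side mass-transfer coefficient, a : interfacial area/volume
      % C_i* : liquid concentration in equilibrium with the bubble gas (Henry's law)
      \begin{equation}
        N_i = k_{L,i}\, a \left( C_i^{*} - C_{i,L} \right),
        \qquad C_i^{*} = \frac{p_i}{H_i}
      \end{equation}
      ```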

    8. International Nuclear Energy Research Initiative Development of Computational Models for Pyrochemical Electrorefiners of Nuclear Waste Transmutation Systems

      SciTech Connect (OSTI)

      M.F. Simpson; K.-R. Kim

      2010-12-01

      In support of closing the nuclear fuel cycle using non-aqueous separations technology, this project aims to develop computational models of electrorefiners based on fundamental chemical and physical processes. Spent driver fuel from Experimental Breeder Reactor-II (EBR-II) is currently being electrorefined in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory (INL), and the Korea Atomic Energy Research Institute (KAERI) is developing electrorefining technology for future application to spent fuel treatment and management in the Republic of Korea (ROK). Electrorefining is a critical component of pyroprocessing, a non-aqueous chemical process which separates spent fuel into four streams: (1) uranium metal, (2) U/TRU metal, (3) metallic high-level waste containing cladding hulls and noble metal fission products, and (4) ceramic high-level waste containing sodium and active metal fission products. Having rigorous yet flexible electrorefiner models will facilitate process optimization and assist in trouble-shooting as necessary. To attain such models, INL/UI has focused on approaches to develop a computationally light and portable two-dimensional (2D) model, while KAERI/SNU has investigated approaches to develop a computationally intensive three-dimensional (3D) model for detailed and fine-tuned simulation.

    9. Inline CBET Model Including SRS Backscatter

      SciTech Connect (OSTI)

      Bailey, David S.

      2015-06-26

      Cross-beam energy transfer (CBET) has been used as a tool on the National Ignition Facility (NIF) since the first energetics experiments in 2009 to control the energy deposition in ignition hohlraums and tune the implosion symmetry. As large amounts of power are transferred between laser beams at the entrance holes of NIF hohlraums, the presence of many overlapping beat waves can lead to stochastic ion heating in the regions where laser beams overlap [P. Michel et al., Phys. Rev. Lett. 109, 195004 (2012)]. Using the CBET gains derived in this paper, we show how to implement these equations in a ray-based laser source for a rad-hydro code.

    10. User's guide for SAMMY: a computer model for multilevel r-matrix fits to neutron data using Bayes' equations

      SciTech Connect (OSTI)

      Larson, N. M.; Perey, F. G.

      1980-11-01

      A method is described for determining the parameters of a model from experimental data based upon the utilization of Bayes' theorem. This method has several advantages over the least-squares method as it is commonly used; one important advantage is that the assumptions under which the parameter values have been determined are more clearly evident than in many results based upon least squares. Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of the R-matrix theory. The required formulae from the R-matrix theory are presented, and the computer implementation of both Bayes' equations and R-matrix theory is described. Details about the computer code and complete input/output information are given.
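
      For orientation, the Bayes update for a linearized model takes the generalized-least-squares form below; the notation is generic and is not copied from the SAMMY manual itself.

      ```latex
      % Prior parameters P with covariance M, data d with covariance V,
      % theory values t(P), sensitivity matrix G = dt/dP (linearized model).
      \begin{align}
        P' &= P + M G^{T} \left( V + G M G^{T} \right)^{-1} \bigl( d - t(P) \bigr), \\
        M' &= M - M G^{T} \left( V + G M G^{T} \right)^{-1} G M .
      \end{align}
      ```

      One advantage noted above follows directly from this form: the posterior covariance M' records how strongly the data constrain each parameter, making the assumptions behind the fitted values explicit.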

    11. BLENDING STUDY FOR SRR SALT DISPOSITION INTEGRATION: TANK 50H SCALE-MODELING AND COMPUTER-MODELING FOR BLENDING PUMP DESIGN, PHASE 2

      SciTech Connect (OSTI)

      Leishear, R.; Poirier, M.; Fowley, M.

      2011-05-26

      The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks where 300,000-800,000 gallons of salt solution will be blended in 1.3 million gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective and from a process safety perspective, where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94-inch-diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, was used in this research. The test tank represents a 1/10.85 scaled version of an 85-foot-diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled per the SRR Project Engineering Manager. SRNL blending tests investigated various fixed-position, non-rotating, dual-nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). Primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks. Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where
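
      As a quick arithmetic check of the quoted scale factor (a sketch, not part of the report):

      ```python
      # Geometric scale factor between the 85 ft prototype tank and the
      # 94 in. pilot-scale blending tank quoted above.
      prototype_diameter_in = 85 * 12   # 85 ft, converted to inches
      pilot_diameter_in = 94
      scale = prototype_diameter_in / pilot_diameter_in
      print(f"scale factor = 1/{scale:.2f}")   # -> 1/10.85
      ```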

    12. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems, including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20-megavolt-ampere electrical feeds from a 90-megawatt substation. The building also

    13. Overview of Computer-Aided Engineering of Batteries and Introduction to Multi-Scale, Multi-Dimensional Modeling of Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

      2012-05-01

      This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

    14. Information regarding previous INCITE awards including selected highlights

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      | U.S. DOE Office of Science (SC) Information regarding previous INCITE awards including selected highlights Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities User Facilities Accessing ASCR Facilities Innovative & Novel Computational Impact on Theory & Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Industrial Users Computational Science Graduate Fellowship (CSGF) Research & Evaluation Prototypes (REP) Science Highlights Benefits of

    15. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Mittal, Sparsh; Vetter, Jeffrey S.

      2015-04-24

      Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

    16. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

      SciTech Connect (OSTI)

      Mittal, Sparsh; Vetter, Jeffrey S.

      2015-04-24

      Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

    17. Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer- Aided Engineering of Batteries under Abuse

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Coupling of Mechanical Behavior of Cell Components to Electrochemical-Thermal Models for Computer- Aided Engineering of Batteries under Abuse P.I.: Ahmad Pesaran Team: Tomasz Wierzbicki and Elham Sahraei (MIT) Genong Li and Lewis Collins (ANSYS) M. Sprague, G.H. Kim and S. Santhangopalan (NREL) June 17, 2014 This presentation does not contain any proprietary, confidential, or otherwise restricted information. Project ID: ES199 NREL/PR-5400-61885 2 Overview * Project Start: October 2013 * Project

    18. Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T.; Schlicher, Bob G

      2015-01-01

      There are many influencing economic factors to weigh from the defender-practitioner stakeholder point-of-view that involve cost combined with development/deployment models. Some examples include the cost of countermeasures themselves, the cost of training, and the cost of maintenance. Meanwhile, we must better anticipate the total cost from a compromise. The return on investment in countermeasures is essentially the avoided impact cost (i.e., the costs from violating availability, integrity, and confidentiality/privacy requirements). The natural question is which main risks must be mitigated/controlled and monitored when deciding where to focus security investments. To answer this question, we have investigated the costs and benefits to the attacker and defender to better estimate risk exposure. In doing so, it's important to develop a sound basis for estimating the factors that drive risk exposure, such as the likelihood that a threat will emerge and whether it will be thwarted. This impact assessment framework can provide key information for ranking cybersecurity threats and managing risk.
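
      The ranking question above reduces to comparing expected losses across threats. A minimal sketch of that calculation follows; all threat names, probabilities, and costs are hypothetical placeholders, not values from the paper.

      ```python
      # Rank hypothetical ICS threats by expected annual loss:
      # likelihood the threat emerges x probability it is NOT thwarted x impact cost.
      threats = [
          # (name, annual likelihood, prob. thwarted, impact cost in $)
          ("HMI credential theft",        0.30, 0.80, 2_000_000),
          ("PLC firmware tampering",      0.05, 0.60, 9_000_000),
          ("Historian data exfiltration", 0.20, 0.50, 1_500_000),
      ]

      def risk_exposure(likelihood, p_thwarted, impact):
          """Expected annual loss if the threat emerges and is not thwarted."""
          return likelihood * (1.0 - p_thwarted) * impact

      for name, p, q, cost in sorted(threats, key=lambda t: -risk_exposure(*t[1:])):
          print(f"{name:30s} ${risk_exposure(p, q, cost):>12,.2f}")
      ```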

    19. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      McClanahan, Richard; De Leon, Phillip L.

      2014-08-20

      The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
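
      The computational saving comes from scoring coarse tree nodes first and only evaluating the Gaussians beneath the best-scoring branches. A minimal sketch of that idea follows; the two-level tree, unit variances, and toy dimensions are stand-ins, and a real system would build the tree with Runnalls' mixture reduction.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      D = 4                                            # feature dimension (toy)
      leaf_mu = rng.normal(size=(8, D))                # 8 UBM component means
      node_mu = leaf_mu.reshape(2, 4, D).mean(axis=1)  # 2 coarse tree nodes
      children = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}    # node -> leaf indices

      def log_gauss(x, mu):
          # log-density of a unit-variance diagonal Gaussian, up to a constant
          return -0.5 * np.sum((x - mu) ** 2, axis=-1)

      def shortlist_components(x, beam=1):
          """Score coarse nodes, then only the leaves under the best nodes."""
          best = np.argsort(log_gauss(x, node_mu))[-beam:]
          leaves = [c for n in best for c in children[n]]
          return leaves, log_gauss(x, leaf_mu[leaves])

      x = rng.normal(size=D)
      print(shortlist_components(x))   # 2 + 4 evaluations instead of 8
      ```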

    20. A non-CFD modeling system for computing 3D wind and concentration fields in urban environments

      SciTech Connect (OSTI)

      Nelson, Matthew A; Brown, Michael J; Williams, Michael D; Gowardhan, Akshay; Pardyjak, Eric R

      2010-01-01

      The Quick Urban & Industrial Complex (QUIC) Dispersion Modeling System has been developed to rapidly compute the transport and dispersion of toxic agent releases in the vicinity of buildings. It is composed of an empirical-diagnostic wind solver, an 'urbanized' Lagrangian random-walk model, and a graphical user interface. The code has been used for homeland security and environmental air pollution applications. In this paper, we discuss the wind solver methodology and improvements made to the original Roeckle schemes in order to better capture flow fields in dense built-up areas. The model-computed wind and concentration fields are then compared to measurements from several field experiments. Improvements to the QUIC Dispersion Modeling System have been made to account for the inhomogeneous and complex building layouts found in large cities. The logic that has been introduced into the code is described, and comparisons of model output to full-scale outdoor urban measurements in Oklahoma City and New York City are given. Although far from perfect, the model agreed fairly well with measurements and in many cases performed equally to CFD codes.
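
      The Lagrangian random-walk ingredient can be illustrated in a few lines of code. The sketch below is a toy homogeneous-turbulence version (constant mean wind, Gaussian velocity increments, ground reflection), not the "urbanized" QUIC model itself, and all parameter values are hypothetical.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      n, dt, steps = 10_000, 0.5, 200           # particles, time step [s], steps
      u_mean = np.array([3.0, 0.0, 0.0])        # mean wind [m/s] (assumed)
      sigma = 0.8                               # turbulent velocity scale [m/s]

      pos = np.zeros((n, 3))                    # point release at the origin
      for _ in range(steps):
          # advect with the mean wind, diffuse with Gaussian increments
          pos += u_mean * dt + sigma * np.sqrt(dt) * rng.normal(size=(n, 3))
          pos[:, 2] = np.abs(pos[:, 2])         # perfect reflection at the ground

      # crude concentration surrogate: fraction of particles in a downwind box
      inside = ((np.abs(pos[:, 0] - 300.0) < 10.0)
                & (np.abs(pos[:, 1]) < 10.0) & (pos[:, 2] < 10.0))
      print(f"fraction of released mass in sampler: {inside.mean():.4f}")
      ```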

    1. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

      SciTech Connect (OSTI)

      Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

      2014-07-15

      Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known study, the dose reduction range is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
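
      A channelized Hotelling observer itself is only a few linear-algebra steps: project images onto channels, estimate the channel covariance, and form a linear template. The sketch below uses synthetic images and generic random channels in place of the paper's DDOG set.

      ```python
      import numpy as np

      rng = np.random.default_rng(2)
      npix, nchan, nimg = 64 * 64, 5, 50

      U = rng.normal(size=(npix, nchan))           # stand-in channel matrix
      signal = np.zeros(npix)
      signal[2080] = 5.0                           # known signal location (SKE)

      absent  = rng.normal(size=(nimg, npix))      # signal-absent images
      present = rng.normal(size=(nimg, npix)) + signal

      va, vp = absent @ U, present @ U             # channelized images
      S = 0.5 * (np.cov(va.T) + np.cov(vp.T))      # pooled channel covariance
      w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template

      t_a, t_p = va @ w, vp @ w                    # observer test statistics
      auc = (t_p[:, None] > t_a[None, :]).mean()   # Wilcoxon estimate of AUC
      print(f"AUC = {auc:.3f}")
      ```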

    2. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

      SciTech Connect (OSTI)

      Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

      2015-04-01

      In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
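
      The backbone of the hybrid is universal Kriging with polynomial regression functions. The sketch below shows that backbone on a one-dimensional toy problem with a hand-picked basis; in the paper the basis is instead selected by least-angle regression from a polynomial chaos expansion, and the correlation length would be optimized rather than fixed.

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      X = np.sort(rng.uniform(-1.0, 1.0, size=30))
      y = np.sin(3.0 * X) + 0.3 * X**2             # toy "simulator" output

      def basis(x):                                # stand-in for selected polynomials
          return np.column_stack([np.ones_like(x), x, x**2])

      def corr(a, b, theta=5.0):                   # Gaussian correlation model
          return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

      F, R = basis(X), corr(X, X) + 1e-10 * np.eye(X.size)
      n, p = F.shape
      # Universal-Kriging system:  [[R, F], [F^T, 0]] [lam; mu] = [r(x0); f(x0)]
      K = np.block([[R, F], [F.T, np.zeros((p, p))]])

      x0 = np.linspace(-1.0, 1.0, 5)
      rhs = np.vstack([corr(X, x0), basis(x0).T])
      lam = np.linalg.solve(K, rhs)[:n]            # kriging weights per x0
      print(lam.T @ y)                             # predictions at x0
      ```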

    3. Flow field computation of the NREL S809 airfoil using various turbulence models

      SciTech Connect (OSTI)

      Chang, Y.L.; Yang, S.L.; Arici, O. [Michigan Technological Univ., Houghton, MI (United States). Mechanical Engineering-Engineering Mechanics Dept.

      1996-10-01

      A performance comparison of three popular turbulence models, namely the Baldwin-Lomax algebraic model, Chien's low-Reynolds-number κ-ε model, and Wilcox's low-Reynolds-number κ-ω model, is given. These models were applied to calculate the flow field around the National Renewable Energy Laboratory S809 airfoil using a Total Variation Diminishing scheme. Numerical results for C_P, C_L, and C_D are presented along with the Delft experimental data. It is shown that all three models perform well for attached flow, i.e., no flow separation at low angles of attack. However, at high angles of attack with flow separation, the convergence characteristics show that Wilcox's model outperforms the other models. Results of this study will be used to guide the authors in their dynamic stall research.

    4. Elucidating reactivity regimes in cyclopentane oxidation: Jet stirred reactor experiments, computational chemistry, and kinetic modeling

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Al Rashidi, Mariam J.; Thion, Sebastien; Togbe, Casimir; Dayma, Guillaume; Mehl, Marco; Dagaut, Philippe; Pitz, William J.; Zador, Judit; Sarathy, S. Mani

      2016-06-22

      This study is concerned with the identification and quantification of species generated during the combustion of cyclopentane in a jet stirred reactor (JSR). Experiments were carried out for temperatures between 740 and 1250 K, equivalence ratios from 0.5 to 3.0, and at an operating pressure of 10 atm. The fuel concentration was kept at 0.1% and the residence time of the fuel/O2/N2 mixture was maintained at 0.7 s. The reactant, product, and intermediate species concentration profiles were measured using gas chromatography and Fourier transform infrared spectroscopy. The concentration profiles of cyclopentane indicate inhibition of reactivity between 850-1000 K for φ=2.0 and φ=3.0. This behavior is interesting, as it has not been observed previously for other fuel molecules, cyclic or non-cyclic. A kinetic model including both low- and high-temperature reaction pathways was developed and used to simulate the JSR experiments. The pressure-dependent rate coefficients of all relevant reactions lying on the PES of cyclopentyl + O2, as well as the C-C and C-H scission reactions of the cyclopentyl radical were calculated at the UCCSD(T)-F12b/cc-pVTZ-F12//M06-2X/6-311++G(d,p) level of theory. The simulations reproduced the unique reactivity trend of cyclopentane and the measured concentration profiles of intermediate and product species. Furthermore, sensitivity and reaction path analyses indicate that this reactivity trend may be attributed to differences in the reactivity of allyl radical at different conditions, and it is highly sensitive to the C-C/C-H scission branching ratio of the cyclopentyl radical decomposition.

    5. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

      SciTech Connect (OSTI)

      Simunovic, S; Zacharia, T

      1997-11-01

      This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as a part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The full derivation of the constitutive relations is given in the framework of the continuum program DYNA3D, and the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on the strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from the physical tests are documented.

    6. Development of a three-phase reacting flow computer model for analysis of petroleum cracking

      SciTech Connect (OSTI)

      Chang, S.L.; Lottes, S.A.; Petrick, M.

      1995-07-01

      A general computational fluid dynamics computer code (ICRKFLO) has been developed for the simulation of the multi-phase reacting flow in a petroleum fluid catalytic cracker riser. ICRKFLO has several unique features. A new integral reaction submodel couples calculations of hydrodynamics and cracking kinetics by making the calculations more efficient in achieving stable convergence while still preserving the major physical effects of reaction processes. A new coke transport submodel handles the process of coke formation in gas phase reactions and the subsequent deposition on the surface of adjacent particles. The code was validated by comparing with experimental results of a pilot scale fluid cracker unit. The code can predict the flow characteristics of gas, liquid, and particulate solid phases, vaporization of the oil droplets, and subsequent cracking of the oil in a riser reactor, which may lead to a better understanding of the internal processes of the riser and the impact of riser geometry and operating parameters on the riser performance.

    7. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

      SciTech Connect (OSTI)

      Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

      2012-10-07

      Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied

    8. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

      SciTech Connect (OSTI)

      Robert W. Numrich

      2008-04-22

      The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to extend the

    9. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

      SciTech Connect (OSTI)

      Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

      2015-01-01

      The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

    10. COMPARATIVE COMPUTATIONAL MODELING OF AIRFLOWS AND VAPOR DOSIMETRY IN THE RESPIRATORY TRACTS OF RAT, MONKEY, AND HUMAN

      SciTech Connect (OSTI)

      Corley, Richard A.; Kabilan, Senthil; Kuprat, Andrew P.; Carson, James P.; Minard, Kevin R.; Jacob, Rick E.; Timchalk, Charles; Glenny, Robb W.; Pipavath, Sudhaker; Cox, Timothy C.; Wallis, Chris; Larson, Richard; Fanucchi, M.; Postlewait, Ed; Einstein, Daniel R.

      2012-07-01

      Coupling computational fluid dynamics (CFD) with physiologically based pharmacokinetic (PBPK) models is useful for predicting site-specific dosimetry of airborne materials in the respiratory tract and elucidating the importance of species differences in anatomy, physiology, and breathing patterns. Historically, these models were limited to discrete regions of the respiratory system. CFD/PBPK models have now been developed for the rat, monkey, and human that encompass airways from the nose or mouth to the lung. A PBPK model previously developed to describe acrolein uptake in nasal tissues was adapted to the extended airway models as an example application. Model parameters for each anatomic region were obtained from the literature, measured directly, or estimated from published data. Airflow and site-specific acrolein uptake patterns were determined under steady-state inhalation conditions to provide direct comparisons with prior data and nasal-only simulations. Results confirmed that regional uptake was dependent upon airflow rates and acrolein concentrations, with nasal extraction efficiencies predicted to be greatest in the rat, followed by the monkey, then the human. For human oral-breathing simulations, acrolein uptake rates in oropharyngeal and laryngeal tissues were comparable to nasal tissues following nasal breathing under the same exposure conditions. For both breathing modes, higher uptake rates were predicted for lower tracheo-bronchial tissues of humans than either the rat or monkey. These extended airway models provide a unique foundation for comparing dosimetry across a significantly more extensive range of conducting airways in the rat, monkey, and human than prior CFD models.

    11. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Sandia Will Host PV Bankability Workshop at Solar Power International (SPI) 2013 Computational Modeling & Simulation, Distribution Grid Integration, Energy, Facilities, Grid ...

    12. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science and Actuarial Practice" Read More Permalink New Project Is the ACME of Computer Science to Address Climate Change Analysis, Climate, Global Climate & Energy, Modeling, ...

    13. Computing Frontier: Distributed Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Frontier: Distributed Computing and Facility Infrastructures Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln); Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory) 1.1 Introduction The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

    14. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    15. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
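
      The routing idea is easy to picture: when a link in the first network is marked defective, traffic between the affected nodes is carried over the second, independent network instead. A toy sketch follows; the four-node topologies are arbitrary examples, not the patent's networks.

      ```python
      from collections import deque

      net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # primary network (a chain)
      net_b = {0: [2], 2: [0, 3], 3: [2, 1], 1: [3]}   # independent second network

      def route(net, src, dst, bad_links=frozenset()):
          """Breadth-first search that refuses to cross defective links."""
          parent, frontier = {src: None}, deque([src])
          while frontier:
              u = frontier.popleft()
              if u == dst:                     # walk parents back to the source
                  path = []
                  while u is not None:
                      path.append(u)
                      u = parent[u]
                  return path[::-1]
              for v in net[u]:
                  if v not in parent and frozenset((u, v)) not in bad_links:
                      parent[v] = u
                      frontier.append(v)
          return None

      bad = {frozenset((1, 2))}                # defective link found in network A
      print(route(net_a, 0, 3, bad))           # None: the primary path is cut
      print(route(net_b, 0, 3))                # [0, 2, 3]: rerouted via network B
      ```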

    16. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      SciTech Connect (OSTI)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

    17. Protein superfamily members as targets for computer modeling: The carbohydrate recognition domain of a macrophage lectin

      SciTech Connect (OSTI)

      Stenkamp, R.E.; Aruffo, A.; Bajorath, J.

      1996-12-31

      Members of protein superfamilies display similar folds, but share only limited sequence identity, often 25% or less. Thus, it is not straightforward to apply standard homology modeling methods to construct reliable three-dimensional models of such proteins. A three-dimensional model of the carbohydrate recognition domain of the rat macrophage lectin, a member of the calcium-dependent (C-type) lectin superfamily, has been generated to illustrate how information provided by comparison of X-ray structures and sequence-structure alignments can aid in comparative modeling when primary sequence similarities are low. 20 refs., 4 figs.

    18. Computational model, method, and system for kinetically-tailoring multi-drug chemotherapy for individuals

      DOE Patents [OSTI]

      Gardner, Shea Nicole

      2007-10-23

      A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment options based on the efficacy value.
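
      The population dynamics described above amount to a small system of coupled ODEs. A toy two-compartment version is sketched below; the rate constants, dosing schedule, and initial cell counts are hypothetical, and the patented model additionally evolves drug resistance and couples to pharmacokinetic/pharmacodynamic submodels.

      ```python
      import numpy as np
      from scipy.integrate import solve_ivp

      growth, to_q, to_p, kill = 0.04, 0.01, 0.005, 0.50   # rates [1/day] (assumed)

      def drug(t):
          """Unit drug effect on day 0-1 of each 7-day cycle (toy schedule)."""
          return 1.0 if (t % 7.0) < 1.0 else 0.0

      def rhs(t, y):
          P, Q = y                     # proliferating and quiescent cells
          dP = growth * P - to_q * P + to_p * Q - kill * drug(t) * P
          dQ = to_q * P - to_p * Q     # only proliferating cells are killed
          return [dP, dQ]

      sol = solve_ivp(rhs, (0.0, 42.0), [1e9, 1e8], max_step=0.05)
      P, Q = sol.y[:, -1]
      print(f"after 6 weekly cycles: P = {P:.3e}, Q = {Q:.3e}")
      ```

      Comparing final tumor burden across candidate schedules in this way is, in miniature, the efficacy ranking the patent describes.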

    19. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

      SciTech Connect (OSTI)

      Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.

      2012-07-01

      A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

    20. Nuclear Engineering Computer Models for In-Core Fuel Management Analysis.

      Energy Science and Technology Software Center (OSTI)

      1992-06-12

      Version 00 VPI-NECM is a nuclear engineering computer system of modules for in-core fuel management analysis. The system consists of 6 independent programs designed to calculate: (1) FARCON - neutron slowing down and epithermal group constants, (2) SLOCON - thermal neutron spectrum and group constants, (3) DISFAC - slow neutron disadvantage factors, (4) ODOG - solution of a one-group neutron diffusion equation, (5) ODMUG - a three-group criticality problem, (6) FUELBURN - fuel burnup in slow-neutron fission reactors.

    1. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

      SciTech Connect (OSTI)

      Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

      2014-05-15

      Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
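
      The power-fit ingredient of the general linear model can be illustrated directly: regress log daily volume on log initial volume, one fit per treatment day. The sketch below uses synthetic volumes as stand-ins for the 35-tumor data set.

      ```python
      import numpy as np

      rng = np.random.default_rng(4)
      v0 = rng.uniform(5.0, 60.0, size=35)        # initial tumor volumes [cm^3]
      # synthetic day-15 volumes following V_t = a * V0**b with noise (assumed)
      v_t = 0.8 * v0**0.95 * rng.lognormal(0.0, 0.05, size=35)

      # ordinary least squares in log space: log V_t = log a + b log V0
      A = np.column_stack([np.ones_like(v0), np.log(v0)])
      (log_a, b), *_ = np.linalg.lstsq(A, np.log(v_t), rcond=None)
      print(f"day 15: V_t ~= {np.exp(log_a):.2f} * V0**{b:.2f}")
      ```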

    2. A computational model for three-dimensional jointed media with a single joint set; Yucca Mountain Site Characterization Project

      SciTech Connect (OSTI)

      Koteras, J.R.

      1994-02-01

      This report describes a three-dimensional model for jointed rock or other media with a single set of joints. The joint set consists of evenly spaced joint planes. The normal joint response is nonlinear elastic and is based on a rational polynomial. Joint shear stress is treated as linear elastic in shear stress versus slip displacement until a critical stress level, governed by a Mohr-Coulomb friction criterion, is attained. The three-dimensional model represents an extension of a two-dimensional, multi-joint model that has been in use for several years. Although most of the concepts in the two-dimensional model translate in a straightforward manner to three dimensions, the concept of slip on the joint planes becomes more complex in three dimensions. While slip in two dimensions can be treated as a scalar quantity, it must be treated as a vector in the joint plane in three dimensions. For the three-dimensional model proposed here, the slip direction is assumed to be the direction of maximum principal strain in the joint plane. Five test problems are presented to verify the correctness of the computational implementation of the model.
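
      For reference, the Mohr-Coulomb limit on joint shear stress mentioned above has the standard form below (generic notation; the report's rational-polynomial normal response is separate and not reproduced here).

      ```latex
      % Mohr-Coulomb friction criterion for slip on a joint plane:
      % c is the cohesion, sigma_n the normal stress on the joint,
      % and phi the friction angle.
      \begin{equation}
        |\tau| \le \tau_{\max} = c + \sigma_n \tan\phi
      \end{equation}
      ```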

    3. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      --Tribology -Mathematics, computing, & computer science --Cloud computing --Modeling, simulation, & visualization --Petascale & exascale computing --Supercomputing &...

    4. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Crawford, David A.

      2015-05-19

      Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments, at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

    5. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    6. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal energy storage coupled with district heating or cooling systems. Volume I. Main text

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains the main text, including the introduction, program description, input data instructions, a description of the output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
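
      The figure of merit such a model produces is a levelized life-cycle cost of delivered heat or chill. A generic discounted-cash-flow form is sketched below; AQUASTOR's actual cash-flow treatment (financing, taxes, separate supply and distribution entities) is considerably more detailed.

      ```latex
      % Generic levelized cost of delivered heat over an analysis period T:
      % I_t capital outlays, O_t O&M costs, F_t purchased energy, Q_t heat
      % delivered in year t, and r the discount rate.
      \begin{equation}
        \mathrm{LCOH} =
        \frac{\sum_{t=0}^{T} (I_t + O_t + F_t)\,(1+r)^{-t}}
             {\sum_{t=0}^{T} Q_t\,(1+r)^{-t}}
      \end{equation}
      ```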

    7. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z

      SciTech Connect (OSTI)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; Strizic, T.

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    8. Computational modeling of structure of metal matrix composite in centrifugal casting process

      SciTech Connect (OSTI)

      Zagorski, Roman [Department of Electrotechnology, Faculty of Materials Science and Metallurgy, Silesian University of Technology, ul. Krasinskiego 8, 40-019, Katowice (Poland)

      2007-04-07

      The structure of an alumina matrix composite reinforced with crystalline particles, obtained during a centrifugal casting process, is studied. Several parameters of the casting process that influence the structure of the composite, such as the pouring temperature, the rotating speed, and the size of the casting mould, are examined. Segregation of the crystalline particles, which depends on other factors such as the density difference between the liquid matrix and the reinforcement, thermal processes connected with solidification of the cast, and processes leading to changes in the physical and structural properties of the liquid composite, is also investigated. All simulations are carried out with the CFD program Fluent. Numerical simulations are performed using the FLUENT two-phase free-surface (air and matrix) unsteady flow model (volume of fluid model - VOF) and the discrete phase model (DPM).

    10. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Jennings, Christopher A.; Ampleford, David J.; Lamppa, Derek C.; Hansen, Stephanie B.; Jones, Brent Manley; Harvey-Thompson, Adam James; Jobe, Marc Ronald Lee; Reneker, Joseph; Rochau, Gregory A.; Cuneo, Michael Edward; et al

      2015-05-18

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Furthermore, guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.

    11. Computational modeling of Krypton gas puffs with tailored mass density profiles on Z

      SciTech Connect (OSTI)

      Jennings, C. A.; Ampleford, D. J.; Lamppa, D. C.; Hansen, S. B.; Jones, B.; Harvey-Thompson, A. J.; Jobe, M.; Strizic, T.; Reneker, J.; Rochau, G. A.; Cuneo, M. E.

      2015-05-15

      Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator of Sandia National Laboratories are able to produce high-intensity Krypton K-shell emission at ~13 keV. Efficiently radiating at these high photon energies is a significant challenge which requires the careful design and optimization of the gas distribution. To facilitate this, we hydrodynamically model the gas flow out of the nozzle and then model its implosion using a 3-dimensional resistive, radiative MHD code (GORGON). This approach enables us to iterate between modeling the implosion and gas flow from the nozzle to optimize radiative output from this combined system. Guided by our implosion calculations, we have designed gas profiles that help mitigate disruption from Magneto-Rayleigh–Taylor implosion instabilities, while preserving sufficient kinetic energy to thermalize to the high temperatures required for K-shell emission.
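
      As a rough illustration of the kinetic-energy requirement these records describe, one can equate the directed kinetic energy of a krypton ion at stagnation with the thermal energy of the ion and its free electrons; the charge state and target temperature below are assumed, illustrative values, not results from the paper.

      ```python
      # Implosion velocity needed to thermalize to a given temperature, assuming
      # the kinetic energy of each Kr ion is shared with its Zbar free electrons.
      # Zbar and the target temperature are illustrative assumptions.
      import math

      AMU_KEV = 931494.0        # atomic mass unit in keV/c^2
      C = 2.998e8               # speed of light, m/s
      m_kr = 83.8 * AMU_KEV     # krypton ion mass in keV/c^2

      T_keV = 5.0               # target temperature for K-shell emission (assumed)
      zbar = 30.0               # mean charge state of Kr at stagnation (assumed)

      # (1/2) m v^2 = (3/2) (1 + Zbar) k T  ->  v/c = sqrt(3 (1 + Zbar) T / m c^2)
      beta = math.sqrt(3.0 * (1.0 + zbar) * T_keV / m_kr)
      print(f"required implosion velocity ~ {beta * C / 1e3:.0f} km/s"
            f" ({beta * C * 1e-4:.0f} cm/us)")
      ```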

    12. Orbital-selective Mott phases of a one-dimensional three-orbital Hubbard model studied using computational techniques

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Liu, Guangkun; Kaushal, Nitin; Liu, Shaozhi; Bishop, Christopher B.; Wang, Yan; Johnston, Steve; Alvarez, Gonzalo; Moreo, Adriana; Dagotto, Elbio R.

      2016-06-24

      A recently introduced one-dimensional three-orbital Hubbard model displays orbital-selective Mott phases with exotic spin arrangements such as spin block states [J. Rincón et al., Phys. Rev. Lett. 112, 106405 (2014)]. In this paper we show that the constrained-path quantum Monte Carlo (CPQMC) technique can accurately reproduce the phase diagram of this multiorbital one-dimensional model, paving the way to future CPQMC studies in systems with more challenging geometries, such as ladders and planes. The success of this approach relies on using the Hartree-Fock technique to prepare the trial states needed in CPQMC. In addition, we study a simplified version of the model where the pair-hopping term is neglected and the Hund coupling is restricted to its Ising component. The corresponding phase diagrams are shown to be only mildly affected by the absence of these technically difficult-to-implement terms. This is confirmed by additional density matrix renormalization group and determinant quantum Monte Carlo calculations carried out for the same simplified model, with the latter displaying only mild fermion sign problems. Lastly, we conclude that these methods are able to capture quantitatively the rich physics of the several orbital-selective Mott phases (OSMP) displayed by this model, thus enabling computational studies of the OSMP regime in higher dimensions, beyond static or dynamic mean-field approximations.

    13. Unveiling Stability Criteria of DNA-Carbon Nanotubes Constructs by Scanning Tunneling Microscopy and Computational Modeling

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Kilina, Svetlana; Yarotski, Dzmitry A.; Talin, A. Alec; Tretiak, Sergei; Taylor, Antoinette J.; Balatsky, Alexander V.

      2011-01-01

      We present a combined approach that relies on computational simulations and scanning tunneling microscopy (STM) measurements to reveal morphological properties and stability criteria of carbon nanotube-DNA (CNT-DNA) constructs. Application of STM allows direct observation of very stable CNT-DNA hybrid structures with the well-defined DNA wrapping angle of 63.4° and a coiling period of 3.3 nm. Using force field simulations, we determine how the DNA-CNT binding energy depends on the sequence and binding geometry of a single-strand DNA. This dependence allows us to quantitatively characterize the stability of a hybrid structure with an optimal π-stacking between DNA nucleotides and the tube surface and better interpret STM data. Our simulations clearly demonstrate the existence of a very stable DNA binding geometry for (6,5) CNT as evidenced by the presence of a well-defined minimum in the binding energy as a function of an angle between DNA strand and the nanotube chiral vector. This novel approach demonstrates the feasibility of CNT-DNA geometry studies with subnanometer resolution and paves the way towards complete characterization of the structural and electronic properties of drug-delivering systems based on DNA-CNT hybrids as a function of DNA sequence and a nanotube chirality.

    14. DFT modeling of adsorption onto uranium metal using large-scale parallel computing

      SciTech Connect (OSTI)

      Davis, N.; Rizwan, U.

      2013-07-01

      There is a dearth of atomistic simulations involving the surface chemistry of γ-uranium, which is of interest as the key fuel component of a breeder-burner stage in future fuel cycles. Recent availability of high-performance computing hardware and software has rendered extended quantum chemical surface simulations involving actinides feasible. With that motivation, data for bulk and surface γ-phase uranium metal are calculated in the plane-wave pseudopotential density functional theory method. Chemisorption of atomic hydrogen and oxygen on several un-relaxed low-index faces of γ-uranium is considered. The optimal adsorption sites (calculated cohesive energies) on the (100), (110), and (111) faces are found to be the one-coordinated top site (8.8 eV), four-coordinated center site (9.9 eV), and one-coordinated top1 site (7.9 eV), respectively, for oxygen; and the four-coordinated center site (2.7 eV), four-coordinated center site (3.1 eV), and three-coordinated top2 site (3.2 eV) for hydrogen. (authors)

    15. COMPUTATIONAL AND EXPERIMENTAL MODELING OF THREE-PHASE SLURRY-BUBBLE COLUMN REACTOR

      SciTech Connect (OSTI)

      Isaac K. Gamwo; Dimitri Gidaspow

      1999-09-01

      Considerable progress has been achieved in understanding three-phase reactors from the point of view of kinetic theory. In a paper in press for publication in Chemical Engineering Science (Wu and Gidaspow, 1999) we have obtained a complete numerical solution of bubble column reactors. In view of the complexity of the simulation a better understanding of the processes using simplified analytical solutions is required. Such analytical solutions are presented in the attached paper, Large Scale Oscillations or Gravity Waves in Risers and Bubbling Beds. This paper presents analytical solutions for bubbling frequencies and standing wave flow patterns. The flow patterns in operating slurry bubble column reactors are not optimum. They involve upflow in the center and downflow at the walls. It may be possible to control flow patterns by proper redistribution of heat exchangers in slurry bubble column reactors. We also believe that the catalyst size in operating slurry bubble column reactors is not optimum. To obtain an optimum size we are following up on the observation of George Cody of Exxon who reported a maximum granular temperature (random particle kinetic energy) for a particle size of 90 microns. The attached paper, Turbulence of Particles in a CFB and Slurry Bubble Columns Using Kinetic Theory, supports George Cody's observations. However, our explanation for the existence of the maximum in granular temperature differs from that proposed by George Cody. Further computer simulations and experiments involving measurements of granular temperature are needed to obtain a sound theoretical explanation for the possible existence of an optimum catalyst size.
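
      For reference, the granular temperature mentioned above is a measure of the random particle kinetic energy, one third of the mean-square velocity fluctuation; a minimal sketch of how it would be computed from sampled particle velocities (the random data below stand in for measured or simulated velocities):

      ```python
      # Granular temperature: theta = (1/3) <C . C>, where C = v - <v> is the
      # fluctuating part of the particle velocity. Random data are placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      v = rng.normal(loc=[0.2, 0.0, 0.05], scale=0.1, size=(10000, 3))  # m/s

      c = v - v.mean(axis=0)              # velocity fluctuations about the mean
      theta = (c * c).sum(axis=1).mean() / 3.0
      print(f"granular temperature ~ {theta:.4f} m^2/s^2")
      ```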

    16. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    17. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    18. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.
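
      The kmer annotation pipeline described above amounts, conceptually, to exact lookup of sliding 8-residue windows against a table of signature oligomers; a toy sketch of that idea, in which the signature table and role strings are invented placeholders rather than real SEED signatures:

      ```python
      # Toy 8-mer annotation: slide an 8-residue window over a protein sequence
      # and look each window up in a table of signature kmers. The table entries
      # here are invented placeholders, not real SEED signatures.
      K = 8
      signatures = {
          "MKLVINAA": "hypothetical oxidoreductase",
          "GGTSAAPV": "ABC transporter, ATP-binding protein",
      }

      def annotate(protein_seq):
          """Return the set of functional roles whose signature 8-mers occur."""
          hits = set()
          for i in range(len(protein_seq) - K + 1):
              role = signatures.get(protein_seq[i:i + K])
              if role:
                  hits.add(role)
          return hits

      print(annotate("MMKLVINAAQRGGTSAAPVLLT"))
      ```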

    19. The Use Of Computational Human Performance Modeling As Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
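
      Task network analysis of this kind is commonly implemented as a Monte Carlo walk through the task sequence, sampling a duration and an error outcome for each step; the bare-bones sketch below uses invented task names, time distributions, and error probabilities, not values from the ATR study.

      ```python
      # Bare-bones Monte Carlo task-network simulation: each task has a lognormal
      # duration and a per-execution human-error probability. All numbers are
      # invented for illustration, not taken from the ATR study.
      import numpy as np

      rng = np.random.default_rng(42)
      # (name, median duration in s, geometric std dev, error probability)
      tasks = [
          ("attach handling tool", 60.0, 1.4, 1e-3),
          ("lift fuel element",    45.0, 1.3, 5e-4),
          ("walk along canal",     90.0, 1.2, 2e-4),
          ("inspect and lower",   120.0, 1.5, 1e-3),
      ]

      n = 100_000
      total = np.zeros(n)
      any_error = np.zeros(n, dtype=bool)
      for _, median, gsd, p_err in tasks:
          total += rng.lognormal(np.log(median), np.log(gsd), n)
          any_error |= rng.random(n) < p_err

      print(f"mean task time  : {total.mean():.0f} s")
      print(f"95th percentile : {np.percentile(total, 95):.0f} s")
      print(f"P(any error)    : {any_error.mean():.4f}")
      ```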

    20. Mathematical and computational modeling of the diffraction problems by discrete singularities method

      SciTech Connect (OSTI)

      Nesvit, K. V.

      2014-11-12

      The main objective of this study is to reduce the boundary-value problems of wave scattering and diffraction on plane-parallel structures to singular or hypersingular integral equations. For these cases we use a method of parametric representations of the integral and pseudo-differential operators. Numerical results for the model scattering problems on periodic and boundary gratings, and on gratings above a flat screen reflector, are presented in this paper.

    1. Computational Intelligence Based Data Fusion Algorithm for Dynamic sEMG and Skeletal Muscle Force Modelling

      SciTech Connect (OSTI)

      Chandrasekhar Potluri; Madhavi Anugolu; Marco P. Schoen; D. Subbaram Naidu

      2013-08-01

      In this work, an array of three surface electromyography (sEMG) sensors is used to acquire muscle extension and contraction signals from 18 healthy test subjects. The skeletal muscle force is estimated from the acquired sEMG signals using a nonlinear Wiener-Hammerstein model that relates the two signals in a dynamic fashion. The model is obtained using a system identification (SI) algorithm. The force models obtained for each sensor are fused using a proposed fuzzy logic concept, with the intent to improve the force estimation accuracy and resilience to sensor failure or misalignment. For the fuzzy logic inference system, the sEMG entropy, the relative error, and the correlation of the force signals are considered in defining the membership functions. The proposed fusion algorithm yields an average of 92.49% correlation between the actual force and the overall estimated force output. In addition, the proposed fusion-based approach is implemented on a test platform. Experiments indicate an improvement in finger/hand force estimation.
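
      The fusion step can be pictured as a weighted combination of the three per-sensor force estimates, with weights derived from signal-quality measures; the sketch below is a simplified stand-in for the paper's fuzzy-logic inference, deriving weights from plain normalized correlations instead of membership functions, with placeholder signals.

      ```python
      # Simplified fusion of per-sensor force estimates: weight each estimate by
      # its correlation with a reference force and normalize. This is a stand-in
      # for the paper's fuzzy-logic inference, not a reimplementation of it.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 1000)
      actual = np.abs(np.sin(0.8 * t))                 # surrogate "actual" force

      # Three per-sensor model outputs with different noise levels (placeholders)
      estimates = [actual + rng.normal(0, s, t.size) for s in (0.05, 0.10, 0.20)]

      corr = np.array([np.corrcoef(e, actual)[0, 1] for e in estimates])
      weights = corr / corr.sum()                      # normalized quality weights
      fused = sum(w * e for w, e in zip(weights, estimates))

      print("weights:", np.round(weights, 3))
      print("fused correlation with actual:",
            round(np.corrcoef(fused, actual)[0, 1], 4))
      ```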

    2. Data aNd Computation Reordering package using temporal and spatial hypergraphs

      Energy Science and Technology Software Center (OSTI)

      2004-08-01

      A package for experimentation with data and computation reordering algorithms. One can read in various file formats representing sparse matrices, reorder the data and computation through command-line parameters, and time benchmark computations that use the new data and computation orderings. The package includes existing reordering algorithms as well as new ones introduced by the authors based on the temporal and spatial locality hypergraph model.

    3. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS): computational physics, computer science, applied mathematics, statistics, and the integration of large data streams are central ...

    4. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for

    5. A Computational Model of the Mark-IV Electrorefiner: Phase I -- Fuel Basket/Salt Interface

      SciTech Connect (OSTI)

      Robert Hoover; Supathorn Phongikaroon; Shelly Li; Michael Simpson; Tae-Sic Yoo

      2009-09-01

      Spent driver fuel from the Experimental Breeder Reactor-II (EBR-II) is currently being treated in the Mk-IV electrorefiner (ER) in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory. The modeling approach to be presented here has been developed to help understand the effect of different parameters on the dynamics of this system. The first phase of this new modeling approach focuses on the fuel basket/salt interface involving the transport of various species found in the driver fuels (e.g. uranium and zirconium). This approach minimizes the guessed parameters to only one, the exchange current density (i0). U3+ and Zr4+ were the only species used for the current study. The result reveals that most of the total cell current is used for the oxidation of uranium, with little being used by zirconium. The dimensionless approach shows that the total potential is a strong function of i0 and a weak function of wt% of uranium in the salt system for initiation processes.
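
      The role of the one guessed parameter, the exchange current density i0, is easiest to see through the Butler-Volmer relation linking current density to overpotential; a minimal sketch, with numerical values that are illustrative rather than Mk-IV operating parameters:

      ```python
      # Butler-Volmer current density as a function of overpotential; i0 is the
      # exchange current density, the one guessed parameter in the model above.
      # Numerical values are illustrative, not Mk-IV operating parameters.
      import math

      F = 96485.0    # Faraday constant, C/mol
      R = 8.314      # gas constant, J/(mol K)

      def butler_volmer(i0, eta, n, T, alpha=0.5):
          """Net anodic current density (A/m^2) for overpotential eta (V)."""
          f = n * F / (R * T)
          return i0 * (math.exp(alpha * f * eta) - math.exp(-(1 - alpha) * f * eta))

      T = 773.0          # molten LiCl-KCl salt temperature, K (approximate)
      for eta in (0.01, 0.05, 0.10):
          i = butler_volmer(i0=50.0, eta=eta, n=3, T=T)   # n=3 for U3+/U (assumed i0)
          print(f"eta = {eta:4.2f} V  ->  i = {i:8.1f} A/m^2")
      ```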

    6. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    7. Toward the standard population synthesis model of the X-ray background: Evolution of X-ray luminosity and absorption functions of active galactic nuclei including Compton-thick populations

      SciTech Connect (OSTI)

      Ueda, Yoshihiro; Akiyama, Masayuki; Hasinger, Günther; Miyaji, Takamitsu; Watson, Michael G.

      2014-05-10

      We present the most up-to-date X-ray luminosity function (XLF) and absorption function of active galactic nuclei (AGNs) over the redshift range from 0 to 5, utilizing the largest, highly complete sample ever available obtained from surveys performed with Swift/BAT, MAXI, ASCA, XMM-Newton, Chandra, and ROSAT. The combined sample, including that of the Subaru/XMM-Newton Deep Survey, consists of 4039 detections in the soft (0.5-2 keV) and/or hard (>2 keV) band. We utilize a maximum likelihood method to reproduce the count rate versus redshift distribution for each survey, by taking into account the evolution of the absorbed fraction, the contribution from Compton-thick (CTK) AGNs, and broadband spectra of AGNs, including reflection components from tori based on the luminosity- and redshift-dependent unified scheme. We find that the shape of the XLF at z ~ 1-3 is significantly different from that in the local universe, for which the luminosity-dependent density evolution model gives a much better description than the luminosity and density evolution model. These results establish the standard population synthesis model of the X-ray background (XRB), which well reproduces the source counts, the observed fractions of CTK AGNs, and the spectrum of the hard XRB. The number ratio of CTK AGNs to the absorbed Compton-thin (CTN) AGNs is constrained to be ~0.5-1.6 to produce the 20-50 keV XRB intensity within present uncertainties, by assuming that they follow the same evolution as CTN AGNs. The growth history of supermassive black holes is discussed based on the new AGN bolometric luminosity function.
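
      Schematically, the luminosity-dependent density evolution (LDDE) form favored here is a local double power-law XLF modulated by an evolution factor whose cutoff redshift depends on luminosity; the toy sketch below uses invented parameter values, not the fitted values from the paper.

      ```python
      # Schematic luminosity-dependent density evolution (LDDE) of an AGN XLF:
      # a local double power law times an evolution factor whose cutoff redshift
      # depends on luminosity. All parameter values are invented placeholders.
      def xlf_local(logL, A=1e-6, logLstar=43.8, g1=0.8, g2=2.2):
          """Local double power-law XLF, dPhi/dlogL (Mpc^-3)."""
          x = 10 ** (logL - logLstar)
          return A / (x ** g1 + x ** g2)

      def evolution(logL, z, p1=4.0, zc0=1.9, logLa=44.6, a=0.3):
          """Density evolution factor with luminosity-dependent cutoff redshift."""
          zc = zc0 * 10 ** (a * (min(logL, logLa) - logLa))
          return (1 + min(z, zc)) ** p1

      for z in (0.0, 1.0, 2.0):
          phi = xlf_local(44.0) * evolution(44.0, z)
          print(f"z = {z:.1f}:  dPhi/dlogL ~ {phi:.2e} Mpc^-3")
      ```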

    8. Computer modeling of Y-Ba-Cu-O thin film deposition and growth

      SciTech Connect (OSTI)

      Burmester, C.; Gronsky, R.; Wille, L. (Dept. of Physics)

      1991-07-01

      The deposition and growth of epitaxial thin films of YBa{sub 2}Cu{sub 3}O{sub 7} are modeled by means of Monte Carlo simulations of the deposition and diffusion of Y, Ba, and Cu oxide particles. This complements existing experimental characterization techniques to allow the study of kinetic phenomena expected to play a dominant role in the inherently non-equilibrium thin film deposition process. Surface morphologies and defect structures obtained in the simulated films are found to closely resemble those observed experimentally. A systematic study of the effects of deposition rate and substrate temperature during in-situ film fabrication reveals that the kinetics of film growth can readily dominate the structural formation of the thin film. 16 refs., 4 figs.

    9. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
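
      The heart of such a cost model is a levelized (life-cycle) cost of delivered energy: discounted lifetime costs divided by discounted lifetime energy delivery. A minimal sketch with invented cash flows:

      ```python
      # Levelized cost of delivered heat: discounted lifetime costs divided by
      # discounted lifetime heat delivery. All cash flows below are invented.
      def levelized_cost(capital, annual_cost, annual_heat_GJ, years, rate):
          """Levelized cost in $/GJ for uniform annual cost and delivery."""
          disc = [(1 + rate) ** -t for t in range(1, years + 1)]
          cost = capital + annual_cost * sum(disc)
          heat = annual_heat_GJ * sum(disc)
          return cost / heat

      lc = levelized_cost(capital=5.0e6, annual_cost=2.0e5,
                          annual_heat_GJ=8.0e4, years=20, rate=0.05)
      print(f"levelized cost of delivered heat ~ ${lc:.2f}/GJ")
      ```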

    10. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

      SciTech Connect (OSTI)

      Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.

      2014-11-01

      Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain
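
      Tube current modulation itself is conceptually simple: the tube current tracks patient attenuation along the scan. The toy z-axis-only sketch below drives the current from an assumed water-equivalent-diameter profile; the profile shape, attenuation coefficient, and tube limits are invented placeholders.

      ```python
      # Toy z-axis tube current modulation: tube current tracks the exponential
      # of patient attenuation, parameterized here by a water-equivalent diameter
      # profile along z. Profile shape and constants are invented placeholders.
      import numpy as np

      z = np.linspace(0, 40, 81)                      # scan range, cm
      d_w = 24 + 8 * np.exp(-((z - 25) / 8) ** 2)     # water-equiv. diameter, cm
      mu_water = 0.19                                  # 1/cm at ~70 keV effective

      # Current needed to keep the detected signal roughly constant, normalized
      # to a reference 300 mA at the thinnest section, then clipped to tube limits.
      mA = 300 * np.exp(mu_water * (d_w - d_w.min()) / 2)
      mA = np.clip(mA, 50, 600)

      print(f"mA range over scan: {mA.min():.0f} to {mA.max():.0f}")
      ```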

    11. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; et al

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    12. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; Rye, Inga  H.; Borresen-Dale, Anne -Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin  L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    13. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    14. Computational Study of Bond Dissociation Enthalpies for Substituted β-O-4 Lignin Model Compounds

      SciTech Connect (OSTI)

      Younker, Jarod M; Beste, Ariana; Buchanan III, A C

      2011-01-01

      The biopolymer lignin is a potential source of valuable chemicals. Phenethyl phenyl ether (PPE) is representative of the dominant β-O-4 ether linkage. Density functional theory (DFT) is used to calculate the Boltzmann-weighted carbon-oxygen and carbon-carbon bond dissociation enthalpies (BDEs) of substituted PPE. These values are important in order to understand lignin decomposition. Exclusion of all conformers that have distributions of less than 5% at 298 K impacts the BDE by less than 1 kcal/mol. We find that aliphatic hydroxyl/methylhydroxyl substituents introduce only small changes to the BDEs (0-3 kcal/mol). Substitution on the phenyl ring at the ortho position substantially lowers the C-O BDE, except in combination with the hydroxyl/methylhydroxyl substituents, where the effect of methoxy substitution is reduced by hydrogen bonding. Hydrogen bonding between the aliphatic substituents and the ether oxygen in the PPE derivatives has a significant influence on the BDE. CCSD(T)-calculated BDEs and hydrogen bond strengths of ortho-substituted anisoles, when compared with M06-2X values, confirm that the latter method is sufficient to describe the molecules studied and provide an important benchmark for lignin model compounds.
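
      The Boltzmann weighting over conformers works as follows; a minimal sketch applying the 5% population cutoff mentioned in the abstract, with invented conformer energies and BDEs:

      ```python
      # Boltzmann-weighted BDE over conformers at 298 K, dropping conformers whose
      # population is below 5% as in the abstract. Energies are invented examples.
      import math

      R = 1.987e-3   # kcal/(mol K)
      T = 298.15

      def weighted_bde(conformers, cutoff=0.05):
          """conformers: list of (relative enthalpy, BDE), both in kcal/mol."""
          w = [math.exp(-dh / (R * T)) for dh, _ in conformers]
          z = sum(w)
          kept = [(wi / z, bde) for wi, (_, bde) in zip(w, conformers)
                  if wi / z >= cutoff]
          z_kept = sum(wi for wi, _ in kept)            # renormalize survivors
          return sum(wi * bde for wi, bde in kept) / z_kept

      confs = [(0.0, 68.9), (0.4, 69.6), (1.1, 70.2)]   # (dH, C-O BDE), invented
      print(f"Boltzmann-weighted BDE ~ {weighted_bde(confs):.1f} kcal/mol")
      ```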

    15. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Research: Computational Structural Mechanics. Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    16. Michael Levitt and Computational Biology

      Office of Scientific and Technical Information (OSTI)

      ... Additional Web Pages: 3 Scientists Win Chemistry Nobel for Complex Computer Modeling, npr Stanford's Nobel Chemistry Prize Honors Computer Science, San Jose Mercury News Without ...

    17. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

      SciTech Connect (OSTI)

      Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

      2012-01-01

      Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

    18. Automated Office Systems Support (AOSS) Quality Assurance Model...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      A quality assurance model, including checklists, for activity relative to network and desktop computer support.

    19. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    20. Sandia National Laboratories: Advanced Simulation and Computing:

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Simulation and Computing: Computational Systems & Software Environment. Related program elements: Integrated Codes; Physics & Engineering Models; Verification & Validation; Facilities Operation & User Support; Research & Collaboration. Crack Modeling.

    1. Inter-comparison of Computer Codes for TRISO-based Fuel Micro-Modeling and Performance Assessment

      SciTech Connect (OSTI)

      Brian Boer; Chang Keun Jo; Wen Wu; Abderrafi M. Ougouag; Donald McEachren; Francesco Venneri

      2010-10-01

      The Next Generation Nuclear Plant (NGNP), the Deep Burn Pebble Bed Reactor (DB-PBR) and the Deep Burn Prismatic Block Reactor (DB-PMR) are all based on fuels that use TRISO particles as their fundamental constituent. The TRISO particle properties include very high durability in radiation environments, hence the designs' reliance on the TRISO to form the principal barrier to radioactive materials release. This durability forms the basis for the selection of this fuel type for applications such as Deep Burn (DB), which require exposures up to four times those expected for light water reactors. It follows that the study and prediction of the durability of TRISO particles must be carried out as part of the safety and overall performance characterization of all the designs mentioned above. Such evaluations have been carried out independently by the performers of the DB project using independently developed codes. These codes, PASTA, PISA and COPA, incorporate models for stress analysis on the various layers of the TRISO particle (and of the intervening matrix material for some of them), models for fission product release and migration and then accumulation within the SiC layer of the TRISO particle, just next to that layer, models for free oxygen and CO formation and migration to the same location, models for the temperature field within the various layers of the TRISO particle, and models for the prediction of failure rates. All these models may be either internal to the code or external. This large number of models, the possibility of different constitutive data and model formulations, and the variety of possible solution techniques make it highly unlikely that the codes would give identical results when modeling identical situations. The purpose of this paper is to present the results of an inter-comparison between the codes and to identify areas of agreement and areas that need reconciliation. The inter-comparison has been carried out by the cooperating

    2. Computational fluid dynamics modeling of chemical looping combustion process with calcium sulphate oxygen carrier - article no. A19

      SciTech Connect (OSTI)

      Baosheng Jin; Rui Xiao; Zhongyi Deng; Qilei Song

      2009-07-01

      Concentrating CO{sub 2} from combustion processes in efficient and energy-saving ways is a first and very important step toward its sequestration. Chemical looping combustion (CLC) can readily achieve this goal. A chemical-looping combustion system consists of a fuel reactor and an air reactor. Two reactors in the form of interconnected fluidized beds are used in the process: (1) a fuel reactor where the oxygen carrier is reduced by reaction with the fuel, and (2) an air reactor where the reduced oxygen carrier from the fuel reactor is oxidized with air. The outlet gas from the fuel reactor consists of CO{sub 2} and H{sub 2}O, while the outlet gas stream from the air reactor contains only N{sub 2} and some unused O{sub 2}. The water in the combustion products can easily be removed by condensation, and pure carbon dioxide is obtained without any loss of energy for separation. Until now, there has been little literature on mathematical modeling of chemical-looping combustion using the computational fluid dynamics (CFD) approach. In this work, a reaction kinetic model of the fuel reactor (CaSO{sub 4} + H{sub 2}) is developed by means of the commercial code FLUENT, and the effects of the partial pressure of H{sub 2} (concentration of H{sub 2}) on chemical looping combustion performance are also studied. The results show that a higher concentration of H{sub 2} enhances the CLC performance.
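
      The sensitivity to H{sub 2} partial pressure can be pictured with a simple power-law rate expression for the carrier reduction CaSO{sub 4} + 4H{sub 2} -> CaS + 4H{sub 2}O; the kinetic constants below are illustrative assumptions, not fitted values from the paper.

      ```python
      # Illustrative power-law kinetics for oxygen-carrier reduction,
      # CaSO4 + 4 H2 -> CaS + 4 H2O: rate = k0 * exp(-Ea/(R T)) * p_H2**n.
      # k0, Ea, and n are invented placeholders, not fitted values.
      import math

      R = 8.314       # J/(mol K)
      k0 = 1.0e4      # pre-exponential factor, mol/(m^3 s bar^n) (assumed)
      Ea = 1.5e5      # activation energy, J/mol (assumed)
      n = 1.0         # reaction order in H2 (assumed)

      def reduction_rate(p_h2_bar, T):
          return k0 * math.exp(-Ea / (R * T)) * p_h2_bar ** n

      T = 1173.0      # fuel-reactor temperature, K (assumed)
      for p in (0.2, 0.5, 1.0):
          print(f"p_H2 = {p:.1f} bar  ->  rate = {reduction_rate(p, T):.3e}")
      ```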

    3. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

      SciTech Connect (OSTI)

      Barbara Chapman

      2012-02-01

      OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

    4. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

    5. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

      SciTech Connect (OSTI)

      Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

      2013-11-15

      Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

    6. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    7. The Computational Physics Program of the national MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1989-01-01

      Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

    8. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... across energy technologies to effectively address ... participating in the Wind Turbine Radar Interference ... Association AWEA WindPower 2015 event in Orlando, Florida. ...

    9. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    10. Computational Modeling & Simulation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    11. Computational Analysis of the Pyrolysis of β-O-4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M06-2X density functional theory at the 6-311++G(d,p) basis set. The M06-2X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O-4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin. Bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be more suited to producing a fungible bio-oil for the production of liquid transportation fuels.
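
      The concerted-versus-homolysis competition comes down to Arrhenius factors: homolysis typically carries a high pre-exponential but an activation energy near the bond dissociation enthalpy, while the concerted elimination has a lower barrier and a much lower pre-exponential. A hedged comparison, with A-factors and barriers chosen only to show the trend, not taken from this study:

      ```python
      # Arrhenius comparison of concerted elimination vs C-O homolysis for a
      # beta-O-4 model compound. A-factors and barriers are illustrative values
      # chosen to show the trend, not numbers from this study.
      import math

      R = 1.987e-3   # kcal/(mol K)

      def k(A, Ea, T):
          return A * math.exp(-Ea / (R * T))

      for T in (700.0, 900.0, 1100.0):
          k_conc = k(A=10**11.5, Ea=45.0, T=T)   # concerted elimination (assumed)
          k_homo = k(A=10**15.5, Ea=69.0, T=T)   # C-O bond homolysis (assumed)
          print(f"T = {T:5.0f} K:  k_concerted/k_homolysis ~ {k_conc/k_homo:.1e}")
      ```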

    12. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

      SciTech Connect (OSTI)

      Katya Le Blanc; Johanna Oxstrand

      2012-04-01

      The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures - field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

    13. Communication with U.S. federal decision makers : a primer with notes on the use of computer models as a means of communication.

      SciTech Connect (OSTI)

      Webb, Erik Karl; Tidwell, Vincent Carroll

      2009-10-01

      This document outlines ways to communicate more effectively with U.S. Federal decision makers by describing the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and in prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community. However, understanding the risk of technical, political, and/or financial infeasibility and possible unintended consequences is extremely difficult. Primarily, decision makers are supported in their deliberations by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - What can these decision makers do? (2) Finding the target audience - Who are the internal and external trusted advisors? (3) Packaging the message - How do we parse and integrate information, and how do we use computer simulation or models in policy communication?

    14. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    15. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project's goals and the resulting research made possible by this award.

    16. ATOMIC-SCALE DESIGN OF IRON FISCHER-TROPSCH CATALYSTS: A COMBINED COMPUTATIONAL CHEMISTRY, EXPERIMENTAL, AND MICROKINETIC MODELING APPROACH

      SciTech Connect (OSTI)

      Manos Mavrikakis; James A. Dumesic; Amit A. Gokhale; Rahul P. Nabar; Calvin H. Bartholomew; Hu Zou; Brian Critchfield

      2005-03-22

      Efforts during this first year focused on four areas: (1) searching/summarizing published FTS mechanistic and kinetic studies of FTS reactions on iron catalysts; (2) construction of mass spectrometer-TPD and Berty CSTR reactor systems; (3) preparation and characterization of unsupported iron and alumina-supported iron catalysts at various iron loadings; and (4) determination of thermochemical parameters such as binding energies of reactive intermediates and heats of FTS elementary reaction steps, and kinetic parameters such as activation energies and frequency factors of FTS elementary reaction steps, on a number of model surfaces. Literature describing mechanistic and kinetic studies of Fischer-Tropsch synthesis on iron catalysts was compiled in a draft review. Construction of the mass spectrometer-TPD system is 90% complete, and construction of the Berty CSTR reactor system is 98% complete. Three unsupported iron catalysts and three alumina-supported iron catalysts were prepared by nonaqueous-evaporative deposition (NED) or aqueous impregnation (AI) and characterized by chemisorption, BET, extent-of-reduction, XRD, and TEM methods. These catalysts, covering a wide range of dispersions and metal loadings, are well-reduced and relatively thermally stable up to 500-600 C in H{sub 2}, and thus ideal for kinetic and mechanistic studies. The alumina-supported iron catalysts will be used for kinetic and mechanistic studies. In the coming year, adsorption/desorption properties, rates of elementary steps, and global reaction rates will be measured for these catalysts, with and without promoters, providing a database for understanding effects of dispersion, metal loading, and support on elementary kinetic parameters and for validation of computational models that incorporate effects of surface structure and promoters. Furthermore, using state-of-the-art self-consistent Density Functional Theory (DFT) methods, we have extensively studied the thermochemistry and kinetics of various elementary steps on

    17. Technology for Increasing Geothermal Energy Productivity. Computer Models to Characterize the Chemical Interactions of Geothermal Fluids and Injectates with Reservoir Rocks, Wells, and Surface Equipment

      SciTech Connect (OSTI)

      Nancy Moller Weare

      2006-07-25

      This final report describes the results of a research program we carried out over a five-year (3/1999-9/2004) period with funding from a Department of Energy geothermal FDP grant (DE-FG07-99ID13745) and from other agencies. The goal of the research projects in this program was to develop modeling technologies that can increase the understanding of geothermal reservoir chemistry and chemistry-related energy production processes. The ability of computer models to handle many chemical variables and complex interactions makes them an essential tool for building a fundamental understanding of a wide variety of complex geothermal resource and production chemistry. With careful choice of methodology and parameterization, the research objective was to show that chemical models can correctly simulate behavior for the ranges of fluid compositions, formation minerals, temperature, and pressure associated with present and near-future geothermal systems, as well as for the very high PT chemistry of deep resources that is intractable with traditional experimental methods. Our research results successfully met these objectives. We demonstrated that advances in physical chemistry theory can be used to accurately describe the thermodynamics of solid-liquid-gas systems via their free energies for wide ranges of composition (X), temperature, and pressure. Eight articles on this work were published in peer-reviewed journals and in conference proceedings. Four are in preparation. Our work has been presented at many workshops and conferences. We also considerably improved our interactive web site (geotherm.ucsd.edu), which was in preliminary form prior to the grant. This site, which includes several model codes treating different XPT conditions, is an effective means to transfer our technologies and is used by the geothermal community and other researchers worldwide. Our models have wide application to many energy-related and other important problems (e.g., scaling prediction in petroleum

    18. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

      SciTech Connect (OSTI)

      Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

      2011-02-01

      Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power-flow models are used when analyses involving thousands of nodes are required, because of the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

    19. MELCOR computer code manuals

      SciTech Connect (OSTI)

      Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

      1995-03-01

      MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

    20. 2-D computer modeling of oil generation and migration in a Transect of the Eastern Venezuela Basin

      SciTech Connect (OSTI)

      Gallango, O.; Parnaud, F.

      1993-02-01

      The aim of the study was a two-dimensional computer simulation of the basin evolution based on available geological, geophysical, geochemical, geothermal, and hydrodynamic data, with the main purpose of determining the hydrocarbon generation and migration history. The modeling was done on two geological sections (platform and pre-thrusting) located along the Chacopata-Uverito transect in the Eastern Venezuelan Basin. In the platform section, a hypothetical source rock equivalent to the Guayuta Group was considered in order to simulate the migration of hydrocarbons. The thermal history reconstruction of this hypothetical source rock confirms that it does not reach the oil window before the middle Miocene and that the maturity in this sector is due to the sedimentation of the Freites, La Pica, and Mesa-Las Piedras formations. Oil expulsion and migration from this hypothetical source rock began after middle Miocene time. The expulsion of the hydrocarbons took place mainly along the Oligocene-Miocene reservoir and does not at present reach zones located beyond the Oritupano field, which implies that the oil accumulated in the southern part of the basin was generated by a source rock located to the north, in the present deformation zone. From 17 m.y. ago onward, a north-to-south water migration pattern is observed in this section. In the pre-thrusting section, hydrocarbon expulsion started during the early Tertiary and took place mainly toward the lower Cretaceous (El Cantil and Barranquin formations). At the end of the passive margin stage, the main migration occurred across the Merecure reservoir, through which the hydrocarbons migrated to the Onado sector before the thrusting.

    1. Computational Fluid Dynamics Modeling of the Bonneville Project: Tailrace Spill Patterns for Low Flows and Corner Collector Smolt Egress

      SciTech Connect (OSTI)

      Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.; Perkins, William A.

      2010-12-01

      In 2003, an extension of the existing ice and trash sluiceway was added at Bonneville Powerhouse 2 (B2). This extension started at the existing corner collector for the ice and trash sluiceway adjacent to Bonneville Powerhouse 2, and the new sluiceway was extended to the downstream end of Cascade Island. The sluiceway was designed to improve juvenile salmon survival by bypassing turbine passage at B2 and placing these smolt in downstream-flowing water, minimizing their exposure to fish and avian predators. In this study, a previously developed computational fluid dynamics model was modified and used to characterize tailrace hydraulics and sluiceway egress conditions for low total river flows and low levels of spillway flow. STAR-CD v4.10 was used for seven scenarios of low total river flow and low spill discharges. The simulation results were specifically examined to look at tailrace hydraulics at 5 ft below the tailwater elevation, and streamlines were used to compare pathways originating in the corner collector outfall and adjacent to the outfall. These streamlines indicated that for all higher spill-percentage cases (25% and greater), streamlines from the corner collector did not approach the shoreline at the downstream end of Bradford Island. For the cases with much larger spill percentages, the streamlines from the corner collector were mid-channel or closer to the Washington shore as they moved downstream. Although at 25% spill and 75 kcfs total river flow the total spill volume was sufficient to "cushion" the flow from the corner collector from the Bradford Island shore, areas of recirculation were modeled in the spillway tailrace. However, at the lowest flows and spill percentages, the streamlines from the B2 corner collector pass very close to the Bradford Island shore. In addition, the very low flow velocities and large areas of recirculation greatly increase the potential predator exposure of the spillway-passed smolt. If there is
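
      Streamline comparisons like those described are produced by integrating seed points through the simulated velocity field. The sketch below shows that generic operation on a synthetic 2-D field; the actual study post-processed STAR-CD output, and every number here is a placeholder:

```python
# Sketch: tracing a streamline through a steady 2-D velocity field, analogous
# to following corner-collector outfall pathways in a CFD solution. The field
# is synthetic; a real workflow would sample the CFD result onto the grid.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0.0, 1000.0, 101)            # streamwise coordinate (m), assumed
y = np.linspace(0.0, 400.0, 41)              # cross-channel coordinate (m), assumed
X, Y = np.meshgrid(x, y, indexing="ij")
u = 1.5 + 0.5 * np.sin(np.pi * Y / 400.0)    # streamwise velocity (m/s), invented
v = 0.2 * np.cos(np.pi * X / 1000.0)         # cross-channel velocity (m/s), invented

interp_u = RegularGridInterpolator((x, y), u)
interp_v = RegularGridInterpolator((x, y), v)

def velocity(t, p):
    """Streamline ODE for steady flow: dx/dt = u(x, y), dy/dt = v(x, y)."""
    q = np.clip(p, [x[0], y[0]], [x[-1], y[-1]])   # stay inside the grid
    return [interp_u(q).item(), interp_v(q).item()]

seed = [10.0, 50.0]                          # hypothetical outfall location (m)
sol = solve_ivp(velocity, (0.0, 600.0), seed, max_step=5.0)
print("streamline endpoint after 600 s:", sol.y[:, -1])
```

      Seeding many such points in and around the outfall, then checking how close the resulting paths come to a shoreline polygon, reproduces the kind of comparison summarized above.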

    2. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes A more detailed hierarchical map of the topology of a compute node is available...

    3. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

      Energy Science and Technology Software Center (OSTI)

      2015-09-01

      Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust, self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and acts as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, where each individual expression covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but which collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development, another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for the affected Li+ ion inventory and the fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions. The model is easily utilized for predictive
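
      The sigmoid-based rate expressions described above can be prototyped directly. The sketch below sums two logistic-type loss terms (one per mechanism) and regresses them against capacity data; the functional form is one common choice consistent with the description, and all data values and parameter names are illustrative, not taken from the software:

```python
# Hedged sketch: capacity fade as a sum of sigmoid loss terms, fit by least
# squares. Each term starts at zero and rises toward its magnitude M_i.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_loss(t, M, k, p):
    """Single-mechanism loss term: 0 at t = 0, approaching M as t grows."""
    return M * (1.0 - 2.0 / (1.0 + np.exp((k * t) ** p)))

def capacity(t, M1, k1, p1, M2, k2, p2):
    # relative capacity = 1 minus the summed mechanism losses
    return 1.0 - sigmoid_loss(t, M1, k1, p1) - sigmoid_loss(t, M2, k2, p2)

t_weeks = np.array([0, 4, 8, 16, 32, 48, 64, 96], dtype=float)
c_meas = np.array([1.00, 0.97, 0.95, 0.93, 0.90, 0.88, 0.87, 0.85])  # invented

popt, _ = curve_fit(capacity, t_weeks, c_meas,
                    p0=[0.10, 0.05, 1.0, 0.10, 0.01, 1.0],
                    bounds=(1e-6, [1.0, 1.0, 3.0, 1.0, 1.0, 3.0]))
print(dict(zip(["M1", "k1", "p1", "M2", "k2", "p2"], np.round(popt, 4))))
```

      Fitting the same form to data collected at fast and slow cycling rates, and differencing the fitted losses, is one way to separate reversible from irreversible fade in the spirit of the description.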

    4. Computer and graphics modeling of heat transfer and phase change in a wall with randomly imbibed PCM

      SciTech Connect (OSTI)

      Solomon, A.D.

      1989-03-01

      We describe the theoretical basis and computer implementation of a simulation code for heat transfer and phase change in a rectangular 2-dimensional region in which PCM has been randomly placed with a preassigned volume fraction.
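
      A minimal version of such a simulation can be sketched as explicit 2-D conduction on a grid in which randomly selected cells contain PCM, with the latent heat handled through an effective heat capacity over the melting range. All material values, the fill fraction, and the boundary temperatures below are assumptions for illustration, not values from the report:

```python
# Hedged sketch: explicit 2-D heat conduction in a wall section with randomly
# placed PCM cells; latent heat is smeared over the melting range Tm +/- dTm.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, dx = 40, 40, 0.005              # 20 cm x 20 cm section, 5 mm cells
pcm = rng.random((nx, ny)) < 0.25       # ~25% PCM volume fraction (assumed)

k = np.where(pcm, 0.2, 0.5)             # conductivity W/m-K (invented values)
rho_c = np.where(pcm, 1.5e6, 1.0e6)     # volumetric heat capacity J/m^3-K
L, Tm, dTm = 1.8e8, 26.0, 1.0           # latent heat J/m^3, melt point C, half-range

T = np.full((nx, ny), 20.0)
dt = 0.2 * dx * dx * rho_c.min() / k.max()   # conservative explicit time step

for _ in range(20000):
    T[0, :], T[-1, :] = 35.0, 20.0      # hot face / cold face (lateral edges wrap)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    # effective heat capacity: add L/(2*dTm) inside the melting band of PCM cells
    melting = pcm & (np.abs(T - Tm) < dTm)
    c_eff = rho_c + np.where(melting, L / (2.0 * dTm), 0.0)
    T = T + dt * k * lap / c_eff

print("mean wall temperature (C):", round(float(T.mean()), 2))
```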

    5. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    6. Computational Tools for Accelerating Carbon Capture Process Development

      SciTech Connect (OSTI)

      Miller, David; Sahinidis, N.V,; Cozad, A; Lee, A; Kim, H; Morinelly, J.; Eslick, J.; Yuan, Z.

      2013-06-04

      This presentation reports the development of advanced computational tools to accelerate next-generation carbon capture technology development. These tools are used to develop an optimized process from rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).
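
      As a loose illustration of the algebraic surrogate modeling step named in that list, the sketch below fits a small algebraic basis to samples of a placeholder "rigorous model" by ordinary least squares. The project's actual tools are far more sophisticated; every function and basis term here is invented:

```python
# Hedged sketch: build a cheap algebraic surrogate of an expensive simulation
# by regressing a small candidate basis against sampled outputs.
import numpy as np

def expensive_sim(x1, x2):
    # placeholder for a rigorous process model that is costly to evaluate
    return np.exp(-x1) * np.sin(3.0 * x2) + 0.5 * x1 * x2

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = expensive_sim(X[:, 0], X[:, 1])

def basis(X):
    """Candidate algebraic terms: 1, x1, x2, x1*x2, x1^2, x2^2, exp(-x1), sin(3 x2)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2,
                            x1**2, x2**2, np.exp(-x1), np.sin(3.0 * x2)])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

Xtest = rng.uniform(0.0, 1.0, size=(5, 2))
residuals = basis(Xtest) @ coef - expensive_sim(Xtest[:, 0], Xtest[:, 1])
print("surrogate residuals at test points:", np.round(residuals, 4))
```

      Once validated, such a surrogate can stand in for the rigorous model inside an optimizer, which is the acceleration idea the presentation describes.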

    7. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    8. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare-earth ions Pr{sup 3+}, regularly located in the lattice of an orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested for use as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computation and the technique of optical computation. Modern hybrid photon echo protocols, which provide sufficient quantum efficiency when reading recorded states, are considered the most promising for quantum computations and communications. (quantum computer)

    9. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts: A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James A. Dumesic; Amit A. Gokhale; Rahul P. Nabar; Calvin H. Bartholomew; Hu Zou; Brian Critchfield

      2006-03-03

      rate-determining steps. In the coming year, studies will focus on quantitative determination of the rates of kinetically-relevant elementary steps on Fe catalysts with/without K and Pt promoters and at various levels of Al{sub 2}O{sub 3} support, providing a database for understanding (1) effects of promoter and support on elementary kinetic parameters and (2) for validation of computational models that incorporate effects of surface structure and promoters. Kinetic parameters will be incorporated into a microkinetics model, enabling prediction of rate without invoking assumptions, e.g. of a rate-determining step or a most-abundant surface intermediate. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on two model surfaces: (1) Fe(110) with 1/4 ML subsurface carbon, and (2) Fe(110) with 1/4 ML Pt adatoms. Reaction networks for FTS on these systems were characterized in full detail by evaluating the thermodynamics and kinetics of each elementary step. We discovered that subsurface C stabilizes all the reactive intermediates, in contrast to Pt, which destabilizes most of them. A comparative study of the reactivities of the modified-Fe surfaces against pure Fe is expected to yield a more comprehensive understanding of promotion mechanisms for FTS on Fe.
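
      A microkinetic model of the kind referenced, that is, elementary-step rate expressions integrated to steady state without assuming a rate-determining step or a most-abundant surface intermediate, can be sketched compactly. The steps and rate constants below are invented placeholders, not the fitted FTS parameters:

```python
# Hedged sketch of a microkinetic model: elementary adsorption, dissociation,
# and hydrogenation steps integrated as coverage ODEs; no step is assumed
# rate-determining. All rate constants and conditions are illustrative.
from scipy.integrate import solve_ivp

k = dict(co_ads=1e2, co_des=1e1, co_dis=5e0, c_hyd=2e0)   # invented constants
p_co, theta_h = 1.0, 0.3        # CO pressure (bar) and H* coverage, held fixed

def rhs(t, y):
    th_co, th_c = y                          # CO* and C* coverages
    th_free = max(1.0 - th_co - th_c, 0.0)   # vacant sites
    r_ads = k["co_ads"] * p_co * th_free     # CO + *  -> CO*
    r_des = k["co_des"] * th_co              # CO*     -> CO + *
    r_dis = k["co_dis"] * th_co * th_free    # CO* + * -> C* + O* (O* removal lumped)
    r_hyd = k["c_hyd"] * th_c * theta_h      # C* + H* -> CH*     (consumes carbon)
    return [r_ads - r_des - r_dis, r_dis - r_hyd]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0])
th_co, th_c = sol.y[:, -1]
print(f"steady-state coverages: CO* = {th_co:.3f}, C* = {th_c:.3f}")
print(f"turnover rate ~ {k['c_hyd'] * th_c * theta_h:.3f} (arbitrary units)")
```

      In a real model of this kind, the DFT-derived energetics described above would set the rate constants through transition-state theory, and promoter effects would enter as shifts in those energetics.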

    10. Final report on LDRD project : elucidating performance of proton-exchange-membrane fuel cells via computational modeling with experimental discovery and validation.

      SciTech Connect (OSTI)

      Wang, Chao Yang (Pennsylvania State University, University Park, PA); Pasaogullari, Ugur (Pennsylvania State University, University Park, PA); Noble, David R.; Siegel, Nathan P.; Hickner, Michael A.; Chen, Ken Shuang

      2006-11-01

      In this report, we document the accomplishments in our Laboratory Directed Research and Development project in which we employed a technical approach of combining experiments with computational modeling and analyses to elucidate the performance of hydrogen-fed proton exchange membrane fuel cells (PEMFCs). In the first part of this report, we document our focused efforts on understanding water transport in and removal from a hydrogen-fed PEMFC. Using a transparent cell, we directly visualized the evolution and growth of liquid-water droplets at the gas diffusion layer (GDL)/gas flow channel (GFC) interface. We further carried out a detailed experimental study to observe, via direct visualization, the formation, growth, and instability of water droplets at the GDL/GFC interface using a specially-designed apparatus, which simulates the cathode operation of a PEMFC. We developed a simplified model, based on our experimental observation and data, for predicting the onset of water-droplet instability at the GDL/GFC interface. Using a state-of-the-art neutron imaging instrument available at NIST (National Institute of Standard and Technology), we probed liquid-water distribution inside an operating PEMFC under a variety of operating conditions and investigated effects of evaporation due to local heating by waste heat on water removal. Moreover, we developed computational models for analyzing the effects of micro-porous layer on net water transport across the membrane and GDL anisotropy on the temperature and water distributions in the cathode of a PEMFC. We further developed a two-phase model based on the multiphase mixture formulation for predicting the liquid saturation, pressure drop, and flow maldistribution across the PEMFC cathode channels. In the second part of this report, we document our efforts on modeling the electrochemical performance of PEMFCs. We developed a constitutive model for predicting proton conductivity in polymer electrolyte membranes and compared

    11. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-01-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines (HATTs). First, an HATT blade was designed using the blade element momentum method in conjunction with a genetic optimization algorithm. Several unstructured computational grids were generated using this blade geometry, and steady CFD simulations were used to perform a grid resolution study. Transient simulations were then performed to determine the effect of time-dependent flow phenomena and the size of the computational timestep on the numerical solution. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.
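
      The blade element momentum method mentioned above couples 2-D section forces to axial and tangential momentum balances through induction factors solved by fixed-point iteration. The sketch below runs that iteration for a single invented blade section with a toy lift/drag polar; it is not the authors' design code:

```python
# Hedged sketch: classical BEM iteration for one blade section of a
# horizontal-axis turbine. Geometry, inflow, and polars are invented.
import numpy as np

B, R, r, chord = 3, 10.0, 7.0, 0.8            # blades, rotor/local radius, chord (m)
twist, U, omega = np.radians(8.0), 2.0, 1.2   # twist, inflow (m/s), rotor speed (rad/s)
sigma = B * chord / (2.0 * np.pi * r)         # local solidity

def polars(alpha):
    """Toy polar: thin-airfoil lift slope with constant drag."""
    return 2.0 * np.pi * alpha, 0.01

a, ap = 0.3, 0.0                              # axial / tangential induction guesses
for _ in range(200):
    phi = np.arctan2(U * (1.0 - a), omega * r * (1.0 + ap))   # inflow angle
    alpha = phi - twist
    cl, cd = polars(alpha)
    cn = cl * np.cos(phi) + cd * np.sin(phi)                  # normal coefficient
    ct = cl * np.sin(phi) - cd * np.cos(phi)                  # tangential coefficient
    a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
    ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
    if abs(a_new - a) < 1e-9 and abs(ap_new - ap) < 1e-9:
        break
    a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)            # under-relaxation

print(f"a = {a:.4f}, a' = {ap:.5f}, inflow angle = {np.degrees(phi):.2f} deg")
```

      Section forces from such a calculation, integrated along the span, are the blade element momentum predictions the CFD results were compared against.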

    12. Development and Verification of a Computational Fluid Dynamics Model of a Horizontal-Axis Tidal Current Turbine

      SciTech Connect (OSTI)

      Lawson, M. J.; Li, Y.; Sale, D. C.

      2011-10-01

      This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an underprediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.

    13. Seizure control with thermal energy? Modeling of heat diffusivity in brain tissue and computer-based design of a prototype mini-cooler.

      SciTech Connect (OSTI)

      Osorio, I.; Chang, F.-C.; Gopalsami, N.; Nuclear Engineering Division; Univ. of Kansas

      2009-10-01

      Automated seizure blockage is a top priority in epileptology. Lowering nervous tissue temperature below a certain level suppresses abnormal neuronal activity, an approach with certain advantages over electrical stimulation, the preferred investigational therapy for pharmacoresistant seizures. A computer model was developed to identify an efficient probe design and parameters that would allow cooling of brain tissue by no less than 21 C in 30 s, maximum. The Pennes equation and the computer code ABAQUS were used to investigate the spatiotemporal behavior of heat diffusivity in brain tissue. Arrays of distributed probes deliver sufficient thermal energy to decrease, inhomogeneously, brain tissue temperature from 37 to 20 C in 30 s and from 37 to 15 C in 60 s. Tissue disruption/loss caused by insertion of this probe is considerably less than that caused by ablative surgery. This model may be applied for the design and development of cooling devices for seizure control.
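
      The Pennes bioheat equation named above balances conduction against blood perfusion and metabolic heat. A minimal 1-D radial sketch around a cold probe follows; the tissue, perfusion, and probe values are textbook-order assumptions, not parameters from the ABAQUS study:

```python
# Hedged sketch: explicit finite-difference solution of the Pennes equation
#   rho*c dT/dt = k * Laplacian(T) + w_b*rho_b*c_b*(T_a - T) + q_met
# in cylindrical symmetry around a cooling probe. All values are assumed.
import numpy as np

rho, c, k_t = 1050.0, 3600.0, 0.5        # tissue density, heat capacity, conductivity
w_b, rho_b, c_b = 0.008, 1050.0, 3800.0  # perfusion (1/s) and blood properties
T_a, q_met = 37.0, 400.0                 # arterial temperature (C), metabolic W/m^3

r = np.linspace(5e-4, 2e-2, 200)         # probe wall at 0.5 mm out to 2 cm
dr = r[1] - r[0]
T = np.full_like(r, 37.0)
alpha = k_t / (rho * c)
dt = 0.2 * dr * dr / alpha               # conservative explicit step

t, t_end = 0.0, 30.0
while t < t_end:
    T[0], T[-1] = 5.0, 37.0              # assumed probe-wall and far-field temps
    lap = np.zeros_like(T)               # cylindrical Laplacian d2T/dr2 + (1/r)dT/dr
    lap[1:-1] = ((T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
                 + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1]))
    perf = w_b * rho_b * c_b * (T_a - T) # perfusion warms tissue back toward T_a
    T[1:-1] += dt * (alpha * lap[1:-1] + (perf[1:-1] + q_met) / (rho * c))
    t += dt

print("temperature 2 mm from probe after 30 s:",
      round(float(np.interp(2e-3, r, T)), 1), "C")
```

      Running such a model across candidate probe spacings is essentially the design loop the abstract describes for reaching target temperatures within the 30-60 s window.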

    14. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

      SciTech Connect (OSTI)

      Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

      2014-09-15

      Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI,0.564-0.650). This value was statistically significantly different
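
      The evaluation described above (per-trainee classifiers over 43 extracted features, scored by ROC AUC) can be prototyped in a few lines of scikit-learn. The sketch below uses synthetic features and labels in place of the study's mammography data:

```python
# Hedged sketch: train an error-making classifier on extracted image features
# and score it with cross-validated ROC AUC. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 43))       # 200 masses x 43 computer-vision features
w = rng.normal(size=43)
y = (X @ w + rng.normal(scale=4.0, size=200) > 0).astype(int)  # 1 = trainee misses

model = LogisticRegression(max_iter=1000)
scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC for predicting missed masses:", round(roc_auc_score(y, scores), 3))
```

      In the study's setting, one such model would be fit per trainee from that trainee's prior reading data, and cases with high predicted miss probability would be surfaced as challenging teaching cases.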

    15. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

      SciTech Connect (OSTI)

      Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

      2015-01-15

      This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loops. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
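
      The acceptance-rejection logic described above can be seen in a stripped-down, single-cell form: a majorant kernel bounds all pairwise rates, a Markov jump waiting time advances the clock, and candidate pairs are accepted with probability K/K_max. Equal particle weighting is used below for brevity; the paper's differential weighting and GPU parallelism are omitted:

```python
# Hedged sketch: acceptance-rejection Monte Carlo coagulation in one cell.
# Units are arbitrary; the kernel and population are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
v = rng.exponential(1.0, size=2000)       # particle volumes

def kernel(vi, vj):
    # toy Brownian-like kernel: K = 2 + (vi/vj)^(1/3) + (vj/vi)^(1/3)
    return (vi ** (1 / 3) + vj ** (1 / 3)) * (vi ** (-1 / 3) + vj ** (-1 / 3))

t, t_end = 0.0, 1.0
while t < t_end and len(v) > 2:
    # majorant: for this kernel the pairwise maximum occurs at the extreme
    # size ratio, so one pass over min/max suffices (no O(N^2) scan)
    k_max = kernel(v.max(), v.min())
    n = len(v)
    rate_max = 0.5 * n * (n - 1) * k_max
    t += rng.exponential(1.0 / rate_max)          # Markov jump waiting time
    i, j = rng.choice(n, size=2, replace=False)
    if rng.random() < kernel(v[i], v[j]) / k_max: # acceptance-rejection test
        v[i] += v[j]                              # coagulate pair (i, j)
        v = np.delete(v, j)

print(len(v), "particles remain; mean volume", round(float(v.mean()), 3))
```

      In the multi-cell setting described above, each GPU thread block would run this inner loop for its own cell's particle population.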

    16. Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Using Monte Carlo modeling, it was found that for noisy signals with a significant background component, accuracy is improved by fitting the total emission data, which includes the...

    17. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    18. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, by use of a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous "valid state" was noted.

    19. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex SCC: The Strategic Computing Complex The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing

    20. Climate Models: Rob Jacob | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      science & technology Environmental modeling tools Programs Mathematics, computing, & computer science Modeling, simulation, & visualization Rob Jacob, Computational Climate...

    1. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

      DOE Patents [OSTI]

      Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

      2001-01-16

      Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real time," which especially enhances clinical use for in vivo applications. The real-time capability is achieved because of the novel geometric model constructed for the planned treatment volume, which, in turn, allows rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume as the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered, or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
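
      The incremental voxel walk described in the patent can be illustrated with a simplified fixed-step traversal; a production tracker would use an exact voxel-to-voxel stepping scheme (Amanatides-Woo style) rather than a fixed increment. The grid contents and step size below are invented:

```python
# Hedged sketch: march a particle track through a uniform-volume-element grid
# and stop at the first voxel whose material differs from the starting voxel.
import numpy as np

rng = np.random.default_rng(3)
material = np.where(rng.random((32, 32, 32)) < 0.9, 1, 2)   # 1=tissue, 2=bone (toy)

def first_interface(origin, direction, grid, step_len=0.25):
    """Fixed-increment walk along the track; returns the voxel index where the
    anatomical material first changes, or None if the particle exits the model."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    start_material = grid[tuple(pos.astype(int))]
    while np.all(pos >= 0) and np.all(pos < np.array(grid.shape) - 1):
        pos = pos + step_len * d
        ijk = tuple(pos.astype(int))
        if grid[ijk] != start_material:
            return ijk          # position of intersection with new material
    return None                 # particle exited the geometric model

print(first_interface((1.0, 1.0, 1.0), (1.0, 0.7, 0.3), material))
```

      Tallying capture, scatter, or exit events at such intersections over many simulated tracks is what builds up the dose distribution the patent refers to.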

    2. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    3. Multi-processor including data flow accelerator module

      DOE Patents [OSTI]

      Davidson, George S.; Pierce, Paul E.

      1990-01-01

      An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
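
      The patent's firing rule (a cell executes only once every operand slot is filled, then distributes its result along its linking slots) is easy to mock up in software. The sketch below is a toy rendering of that idea, not the patented hardware:

```python
# Hedged sketch: data-flow cells with operand slots, a primitive lookup table,
# and result distribution along links. Purely illustrative structure.
from dataclasses import dataclass, field

@dataclass
class Cell:
    primitive: str                                    # key into the lookup table
    operands: list = field(default_factory=list)      # operand slots (None = empty)
    links: list = field(default_factory=list)         # (destination cell, slot) pairs

PRIMITIVES = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def deliver(cell, slot, value, queue):
    """Fill one operand slot; when all slots are tagged full, schedule the cell."""
    cell.operands[slot] = value
    if all(op is not None for op in cell.operands):
        queue.append(cell)

def run(queue):
    while queue:
        cell = queue.pop(0)
        result = PRIMITIVES[cell.primitive](*cell.operands)
        for dest, slot in cell.links:                 # distribute along the arcs
            deliver(dest, slot, result, queue)
        cell.result = result

# evaluate (a + b) * (a + c) with a=2, b=3, c=4
m = Cell("mul", [None, None])
a1 = Cell("add", [None, None], [(m, 0)])
a2 = Cell("add", [None, None], [(m, 1)])
q = []
deliver(a1, 0, 2, q); deliver(a1, 1, 3, q)
deliver(a2, 0, 2, q); deliver(a2, 1, 4, q)
run(q)
print(m.result)   # 30
```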

    4. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer security Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    5. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    6. Analytical and computational study of the ideal full two-fluid plasma model and asymptotic approximations for Hall-magnetohydrodynamics

      SciTech Connect (OSTI)

      Srinivasan, B.; Shumlak, U.

      2011-09-15

      The 5-moment two-fluid plasma model uses Euler equations to describe the ion and electron fluids and Maxwell's equations to describe the electric and magnetic fields. Two-fluid physics becomes significant when the characteristic spatial scales are on the order of the ion skin depth and characteristic time scales are on the order of the ion cyclotron period. The full two-fluid plasma model has disparate characteristic speeds ranging from the ion and electron speeds of sound to the speed of light. Two asymptotic approximations are applied to the full two-fluid plasma to arrive at the Hall-MHD model, namely negligible electron inertia and infinite speed of light. The full two-fluid plasma model and the Hall-MHD model are studied for applications to an electromagnetic plasma shock, geospace environmental modeling (GEM challenge) magnetic reconnection, an axisymmetric Z-pinch, and an axisymmetric field reversed configuration (FRC).
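
      For readers unfamiliar with the models being compared, the standard textbook form of the 5-moment two-fluid system consists of per-species Euler equations coupled through the Lorentz force, and the Hall-MHD generalized Ohm's law follows in the stated limits. The equations below are our paraphrase of the standard ideal forms (resistive terms omitted), not text from the paper:

```latex
% Per-species (s = i, e) 5-moment continuity and momentum equations, plus the
% ideal Hall-MHD generalized Ohm's law obtained when electron inertia is
% neglected and the speed of light is taken to infinity.
\begin{align}
  \partial_t n_s + \nabla \cdot (n_s \mathbf{u}_s) &= 0, \\
  m_s n_s \left( \partial_t \mathbf{u}_s + \mathbf{u}_s \cdot \nabla \mathbf{u}_s \right)
    &= q_s n_s \left( \mathbf{E} + \mathbf{u}_s \times \mathbf{B} \right) - \nabla p_s, \\
  \mathbf{E} + \mathbf{u} \times \mathbf{B}
    &= \frac{1}{n e} \left( \mathbf{J} \times \mathbf{B} - \nabla p_e \right).
\end{align}
```

      The Hall term J x B/(ne) is what distinguishes these asymptotic models from ideal MHD at scales near the ion skin depth, the regime the abstract identifies as where two-fluid physics matters.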

    7. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models ...

    8. Computational Procedures for Determining Parameters in Ramberg...

      Office of Scientific and Technical Information (OSTI)

      RAMBO: A Computer Code for Determining Parameters in Ramberg-Osgood Elastoplastic Model Based on Modulus and Damping Versus Strain. A computer code, RAMBO, is ...
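
      For context, the Ramberg-Osgood model this record refers to expresses strain as an elastic term plus a power-law plastic term, from which modulus-reduction and damping curves follow. A sketch in its common shear form is below, with invented parameters (RAMBO itself fits such parameters from modulus and damping versus strain data):

```python
# Hedged sketch: Ramberg-Osgood stress-strain relation in shear,
#   gamma = tau/G0 * (1 + alpha * |tau/tau_y|**(n-1)),
# and the resulting secant-modulus reduction. All values are illustrative.
import numpy as np

G0, tau_y, alpha, n = 30e6, 60e3, 1.0, 3.0   # small-strain modulus (Pa), etc.

def ramberg_osgood_strain(tau):
    """Shear strain for a given shear stress under the Ramberg-Osgood law."""
    return tau / G0 * (1.0 + alpha * np.abs(tau / tau_y) ** (n - 1))

tau = np.linspace(1e3, 120e3, 5)
gamma = ramberg_osgood_strain(tau)
G_sec = tau / gamma                          # secant modulus at each stress level
print("G/G0 versus stress level:", np.round(G_sec / G0, 3))
```

      The printed ratios fall with stress amplitude, which is the modulus-reduction behavior the code's name alludes to.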

    9. Mechanism and computational model for Lyman-{alpha}-radiation generation by high-intensity-laser four-wave mixing in Kr-Ar gas

      SciTech Connect (OSTI)

      Louchev, Oleg A.; Saito, Norihito; Wada, Satoshi; Bakule, Pavel; Yokoyama, Koji; Ishida, Katsuhiko; Iwasaki, Masahiko

      2011-09-15

      We present a theoretical model combined with a computational study of a laser four-wave mixing process under optical discharge in which the non-steady-state four-wave amplitude equations are integrated with the kinetic equations of initial optical discharge and electron avalanche ionization in Kr-Ar gas. The model is validated by earlier experimental data showing strong inhibition of the generation of pulsed, tunable Lyman-{alpha} (Ly-{alpha}) radiation when using sum-difference frequency mixing of 212.6 nm and tunable infrared radiation (820-850 nm). The rigorous computational approach to the problem reveals the possibility and mechanism of strong auto-oscillations in sum-difference resonant Ly-{alpha} generation due to the combined effect of (i) 212.6-nm (2+1)-photon ionization producing initial electrons, followed by (ii) the electron avalanche dominated by 843-nm radiation, and (iii) the final breakdown of the phase matching condition. The model shows that the final efficiency of Ly-{alpha} radiation generation can achieve a value of {approx}5x10{sup -4} which is restricted by the total combined absorption of the fundamental and generated radiation.

    10. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
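
      The claimed sequence can be made concrete with a small threaded mock-up: a per-node latency table, a barrier, a broadcast pulse, and a time-base assignment. The sketch below is our illustration of those steps, not the actual parallel-computer implementation; all latency values are invented:

```python
# Hedged toy rendering of the patented sequence on four simulated "nodes".
import threading

LATENCY = {0: 0.0, 1: 1.2, 2: 1.5, 3: 2.1}   # root-to-node latencies (toy, us)
barrier = threading.Barrier(len(LATENCY))     # stands in for the global barrier
pulse = threading.Event()                     # stands in for the pulse signal
time_base = {}

def node(rank):
    latency = LATENCY[rank]   # "calculate data transmission latency from the root"
    barrier.wait()            # local + global barrier collapsed into one here
    if rank == 0:
        pulse.set()           # root sends the pulse to all compute nodes
    pulse.wait()              # each node's "pulse waiter" wakes on the pulse
    time_base[rank] = latency # set time base = root-to-node transmission latency
    # ...and exit the global barrier operation

threads = [threading.Thread(target=node, args=(r,)) for r in LATENCY]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(time_base)
```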

    11. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    12. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national

    13. An introduction to computer viruses

      SciTech Connect (OSTI)

      Brown, D.R.

      1992-03-01

      This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

    14. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    15. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

    16. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    17. Impact-GMI Model

      Energy Science and Technology Software Center (OSTI)

      2007-03-22

      IMPACT-GMI is an atmospheric chemical transport model designed to run on massively parallel computers. It is designed to model trace pollutants in the atmosphere. It includes models for emission, chemistry and deposition of pollutants. It can be used to assess air quality and its impact on future climate change.

    18. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

      SciTech Connect (OSTI)

      Farbin, Amir

      2015-07-15

      This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

    19. BLT-EC (Breach, Leach and Transport-Equilibrium Chemistry) data input guide. A computer model for simulating release and coupled geochemical transport of contaminants from a subsurface disposal facility

      SciTech Connect (OSTI)

      MacKinnon, R.J.; Sullivan, T.M.; Kinsey, R.R.

      1997-05-01

      The BLT-EC computer code has been developed, implemented, and tested. BLT-EC is a two-dimensional finite element computer code capable of simulating the time-dependent release and reactive transport of aqueous phase species in a subsurface soil system. BLT-EC contains models to simulate the processes (container degradation, waste-form performance, transport, chemical reactions, and radioactive production and decay) most relevant to estimating the release and transport of contaminants from a subsurface disposal system. Water flow is provided through tabular input or auxiliary files. Container degradation considers localized failure due to pitting corrosion and general failure due to uniform surface degradation processes. Waste-form performance considers release to be limited by one of four mechanisms: rinse with partitioning, diffusion, uniform surface degradation, and solubility. Transport considers the processes of advection, dispersion, diffusion, chemical reaction, radioactive production and decay, and sources (waste form releases). Chemical reactions accounted for include complexation, sorption, dissolution-precipitation, oxidation-reduction, and ion exchange. Radioactive production and decay in the waste form is simulated. To improve the usefulness of BLT-EC, a pre-processor, ECIN, which assists in the creation of chemistry input files, and a post-processor, BLTPLOT, which provides a visual display of the data have been developed. BLT-EC also includes an extensive database of thermodynamic data that is also accessible to ECIN. This document reviews the models implemented in BLT-EC and serves as a guide to creating input files and applying BLT-EC.
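
      As context for the transport processes listed above, the sketch below steps a 1-D advection-dispersion-decay equation explicitly. It is a toy stand-in, not BLT-EC itself (which is a two-dimensional finite element code with coupled geochemistry); all parameter values are invented:

```python
# Hedged sketch: explicit upwind advection + central dispersion + first-order
# radioactive decay for a contaminant released at one end of a soil column.
import numpy as np

nx, dx = 200, 0.1                  # 20 m column discretized into 0.1 m cells
v, D, lam = 1e-6, 5e-7, 7.3e-10    # velocity (m/s), dispersion (m^2/s), decay (1/s)
c = np.zeros(nx)
dt = 0.4 * min(dx / v, dx * dx / (2.0 * D))   # respect advective/dispersive limits

for _ in range(5000):
    c[0] = 1.0                                    # fixed source at the waste form
    adv = -v * (c[1:-1] - c[:-2]) / dx            # upwind advection
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (adv + disp - lam * c[1:-1])  # decay as a sink term

print("plume front (c < 0.01) at", round(np.argmax(c < 0.01) * dx, 1), "m")
```

      BLT-EC layers container failure, waste-form release, and the listed chemical reactions on top of this transport core, which is why its input structure warrants the ECIN pre-processor described above.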

    20. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    1. Computer-Aided Engineering of Batteries for Designing Better Li-Ion Batteries (Presentation)

      SciTech Connect (OSTI)

      Pesaran, A.; Kim, G. H.; Smith, K.; Lee, K. J.; Santhanagopalan, S.

      2012-02-01

      This presentation describes the current status of the DOE's Energy Storage R and D program, including modeling and design tools and the Computer-Aided Engineering for Automotive Batteries (CAEBAT) program.

    2. Computed solid phases limiting the concentration of dissolved constituents in basalt aquifers of the Columbia Plateau in eastern Washington. Geochemical modeling and nuclide/rock/groundwater interaction studies

      SciTech Connect (OSTI)

      Deutsch, W.J.; Jenne, E.A.; Krupka, K.M.

      1982-08-01

      A speciation-solubility geochemical model, WATEQ2, was used to analyze geographically diverse ground-water samples from the aquifers of the Columbia Plateau basalts in eastern Washington. The ground-water samples compute to be at equilibrium with calcite, which provides both a solubility control for dissolved calcium and a pH buffer. Amorphic ferric hydroxide, Fe(OH)3(A), is at saturation or modestly oversaturated in the few water samples with measured redox potentials. Most of the ground-water samples compute to be at equilibrium with amorphic silica (glass) and wairakite, a zeolite, and are saturated to oversaturated with respect to allophane, an amorphic aluminosilicate. The water samples are saturated to undersaturated with halloysite, a clay, and are variably oversaturated with regard to other secondary clay minerals. Equilibrium between the ground water and amorphic silica presumably results from the dissolution of the glassy matrix of the basalt. The oversaturation of the clay minerals other than halloysite indicates that their rate of formation lags the dissolution rate of the basaltic glass. The modeling results indicate that metastable amorphic solids limit the concentration of dissolved silicon and suggest the same possibility for aluminum and iron, and that the processes of dissolution of basaltic glass and formation of metastable secondary minerals are continuing even though the basalts are of Miocene age. The computed solubility relations are found to agree with the known assemblages of alteration minerals in the basalt fractures and vesicles. Because the chemical reactivity of the bedrock will influence the transport of solutes in ground water, the observed solubility equilibria are important factors with regard to chemical-retention processes associated with the possible migration of nuclear waste stored in the earth's crust.

    3. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions: Computational Sciences and Engineering; Computer Sciences and Mathematics; Information Technology Services; Joint Institute for Computational Sciences; National Center for Computational Sciences

    4. Quantitative Analysis of Biofuel Sustainability, Including Land...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions ...

    5. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts; A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James Dumesic; Rahul Nabar; Calvin Bartholonew; Hu Zou; Uchenna Paul

      2008-09-29

      This work focuses on (1) searching/summarizing published Fischer-Tropsch synthesis (FTS) mechanistic and kinetic studies of FTS reactions on iron catalysts; (2) preparation and characterization of unsupported iron catalysts with/without potassium/platinum promoters; (3) measurement of H2 and CO adsorption/dissociation kinetics on iron catalysts using transient methods; (4) analysis of the transient rate data to calculate kinetic parameters of early elementary steps in FTS; (5) construction of a microkinetic model of FTS on iron; and (6) validation of the model from collection of steady-state rate data for FTS on iron catalysts. Three unsupported iron catalysts and three alumina-supported iron catalysts were prepared by non-aqueous-evaporative deposition (NED) or aqueous impregnation (AI) and characterized by chemisorption, BET, temperature-programmed reduction (TPR), extent-of-reduction, XRD, and TEM methods. These catalysts, covering a wide range of dispersions and metal loadings, are well-reduced and relatively thermally stable up to 500-600 C in H2 and thus ideal for kinetic and mechanistic studies. Kinetic parameters for CO adsorption, CO dissociation, and surface carbon hydrogenation on these catalysts were determined from temperature-programmed desorption (TPD) of CO, temperature-programmed surface hydrogenation (TPSR), temperature-programmed hydrogenation (TPH), and isothermal, transient hydrogenation (ITH). A microkinetic model was constructed for the early steps in FTS on polycrystalline iron from the kinetic parameters of elementary steps determined experimentally in this work and from literature values. Steady-state rate data were collected in a Berty reactor and used for validation of the microkinetic model. These rate data were fitted to 'smart' Langmuir-Hinshelwood rate expressions derived from a sequence of elementary steps and using a combination of fitted steady-state parameters and parameters specified from the transient
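
      As a sketch of the kind of Langmuir-Hinshelwood rate expression referred to above (the report's actual fitted form and parameter values are not reproduced here; this generic bimolecular form with made-up constants is for illustration only):

        def lh_rate(k, K_co, K_h2, p_co, p_h2):
            """Generic Langmuir-Hinshelwood rate for a bimolecular surface
            reaction with competitive CO/H2 adsorption. Illustrative only;
            not the fitted FTS expression from the report.
            """
            site_balance = 1.0 + K_co * p_co + K_h2 * p_h2
            return k * (K_co * p_co) * (K_h2 * p_h2) / site_balance**2

        # Example: rate at 10 atm CO and 20 atm H2 with invented constants
        print(lh_rate(k=1.0e-3, K_co=0.5, K_h2=0.1, p_co=10.0, p_h2=20.0))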

    6. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

    7. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    9. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

      Office of Scientific and Technical Information (OSTI)

      However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step ...

    10. Unitarity bounds in the Higgs model including triplet fields...

      Office of Scientific and Technical Information (OSTI)

      in the Higgs potential, by which the electroweak rho parameter is unity at the tree level. ...

    11. Topic A Note: Includes STEPS Subtopic

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Topic A (Note: Includes STEPS Subtopic). 33 Total Projects. Developing and Enhancing Workforce Training Programs

    12. Application of a watershed computer model to assess reclaimed landform stability in support of reclamation liability release

      SciTech Connect (OSTI)

      Peterson, M.R.; Zevenbergen, L.W.; Cochran, J.

      1995-09-01

      The Surface Mining Control and Reclamation Act of 1977 (SMCRA) instituted specific requirements for surface coal mine reclamation that included reclamation bonding and tied release of liability to achieving acceptable reclamation standards. Generally, such reclamation standards include successfully revegetating the site, achieving the approved postmine land use and minimizing disturbances to the prevailing hydrologic balance. For western surface coal mines the period of liability continues for a minimum of 10 years commencing with the last year of augmented seeding, fertilizing, irrigation or other work. This paper describes the methods and procedures conducted to evaluate the runoff and sediment yield response from approximately 2,700 acres of reclaimed lands at Peabody Western Coal Company's (PWCC) Black Mesa Mine located near Kayenta, Arizona. These analyses were conducted in support of an application for liability release submitted to the Office of Surface Mining (OSM) for reclaimed interim land parcels within the 2,700 acres evaluated.

    13. Properties of a soft-core model of methanol: An integral equation theory and computer simulation study

      SciTech Connect (OSTI)

      Hu, Matej; Urbic, Tomaz; Muna, Gianmarco

      2014-10-28

      Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed.
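
      The two site-site potentials in this coarse-grained picture can be sketched as follows. The exact functional form and parameters of the cited model are not given in the abstract, so the core-softened term below (a Lennard-Jones core plus a repulsive Gaussian shoulder, one common construction) is an assumption for illustration:

        import numpy as np

        def lennard_jones(r, eps=1.0, sigma=1.0):
            """Standard 12-6 Lennard-Jones potential (methyl-like site)."""
            sr6 = (sigma / r)**6
            return 4.0 * eps * (sr6**2 - sr6)

        def core_softened(r, eps=1.0, sigma=1.0, a=2.0, r0=1.5, w=0.3):
            """LJ core plus a repulsive Gaussian shoulder, one common way
            to build a core-softened potential (hydroxyl-like site). Form
            and parameters are illustrative, not those of the cited model.
            """
            shoulder = a * eps * np.exp(-((r - r0) / w)**2)
            return lennard_jones(r, eps, sigma) + shoulder

        r = np.linspace(0.8, 3.0, 5)
        print(core_softened(r))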

    14. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods: performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Image captions: growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA; Rayleigh-Taylor turbulence, the largest turbulence simulations to date. Topics: advanced multi-scale modeling; turbulence datasets; density iso-surfaces

    15. Compositional modeling in porous media using constant volume flash and flux computation without the need for phase identification

      SciTech Connect (OSTI)

      Polívka, Ondřej; Mikyška, Jiří

      2014-09-01

      The paper deals with the numerical solution of a compositional model describing compressible two-phase flow of a mixture composed of several components in porous media with species transfer between the phases. The mathematical model is formulated by means of the extended Darcy's laws for all phases, component continuity equations, constitutive relations, and appropriate initial and boundary conditions. The splitting of components among the phases is described using a new formulation of the local thermodynamic equilibrium which uses volume, temperature, and moles as specification variables. The problem is solved numerically using a combination of the mixed-hybrid finite element method for the total flux discretization and the finite volume method for the discretization of transport equations. A new approach to numerical flux approximation is proposed, which does not require the phase identification and determination of correspondence between the phases on adjacent elements. The time discretization is carried out by the backward Euler method. The resulting large system of nonlinear algebraic equations is solved by the Newton-Raphson iterative method. We provide eight examples of different complexity to show reliability and robustness of our approach.
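
      The Newton-Raphson step for the resulting nonlinear algebraic system has the usual structure; a minimal dense-Jacobian sketch (the paper's solver will use problem-specific assembly and linear algebra, so this is illustrative only):

        import numpy as np

        def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
            """Solve residual(x) = 0 by undamped Newton iteration.

            residual : callable returning the residual vector R(x)
            jacobian : callable returning the Jacobian matrix dR/dx
            """
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    return x
                # solve J * dx = R and update x <- x - dx
                x = x - np.linalg.solve(jacobian(x), r)
            raise RuntimeError("Newton iteration did not converge")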

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Node Configuration: 9,572 nodes; 1 quad-core AMD Opteron 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB...

    17. A Component Architecture for High-Performance Scientific Computing

      SciTech Connect (OSTI)

      Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

      2004-12-14

      The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era. The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    19. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    20. Collectively loading an application in a parallel computer

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

      2016-01-05

      Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
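
      The leader-reads-then-broadcasts pattern described in the claim is, in spirit, a collective broadcast. A minimal mpi4py sketch of that pattern (not the patented implementation; the file name is a placeholder):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        app_image = None
        if rank == 0:                          # rank 0 plays the "job leader"
            with open("app.bin", "rb") as f:   # hypothetical application binary
                app_image = f.read()

        # every rank in the communicator receives the image from the leader
        app_image = comm.bcast(app_image, root=0)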

    1. Atomic-Scale Design of Iron Fischer-Tropsch Catalysts: A Combined Computational Chemistry, Experimental, and Microkinetic Modeling Approach

      SciTech Connect (OSTI)

      Manos Mavrikakis; James A. Dumesic; Rahul P. Nabar

      2006-09-29

      Work continued on the development of a microkinetic model of Fischer-Tropsch synthesis (FTS) on supported and unsupported Fe catalysts. The following aspects of the FT mechanism on unsupported iron catalysts were investigated on during this third year: (1) the collection of rate data in a Berty CSTR reactor based on sequential design of experiments; (2) CO adsorption and CO-TPD for obtaining the heat of adsorption of CO on polycrystalline iron; and (3) isothermal hydrogenation (IH) after Fischer Tropsch reaction to identify and quantify surface carbonaceous species. Rates of C{sub 2+} formation on unsupported iron catalysts at 220 C and 20 atm correlated well to a Langmuir-Hinshelwood type expression, derived assuming carbon hydrogenation to CH and OH recombination to water to be rate-determining steps. From desorption of molecularly adsorbed CO at different temperatures the heat of adsorption of CO on polycrystalline iron was determined to be 100 kJ/mol. Amounts and types of carbonaceous species formed after FT reaction for 5-10 minutes at 150, 175, 200 and 285 C vary significantly with temperature. Mr. Brian Critchfield completed his M.S. thesis work on a statistically designed study of the kinetics of FTS on 20% Fe/alumina. Preparation of a paper describing this work is in progress. Results of these studies were reported at the Annual Meeting of the Western States Catalysis and at the San Francisco AIChE meeting. In the coming period, studies will focus on quantitative determination of the rates of kinetically-relevant elementary steps on unsupported Fe catalysts with/without K and Pt promoters by SSITKA method. This study will help us to (1) understand effects of promoter and support on elementary kinetic parameters and (2) build a microkinetics model for FTS on iron. Calculations using periodic, self-consistent Density Functional Theory (DFT) methods were performed on models of defected Fe surfaces, most significantly the stepped Fe(211) surface. Binding

    2. Dedicated heterogeneous node scheduling including backfill scheduling

      DOE Patents [OSTI]

      Wood, Robert R.; Eckert, Philip D.; Hommes, Gregg

      2006-07-25

      A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the free nodes over time. For each prioritized job, the FNS of each sub-pool having nodes usable by the job is examined to determine the earliest time range (ETR) capable of running the job; the job is then scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than that of a higher priority job (HPJ), the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
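
      A toy version of the backfill test follows; the data structures are invented for illustration, and a real scheduler would also track node counts per reservation:

        def can_backfill(job, window, reserved_starts):
            """Decide whether a lower-priority job fits in a free window
            without delaying any promised higher-priority start time.

            job             : (nodes_needed, runtime) tuple
            window          : (start, end, nodes_free) slot from the FNS
            reserved_starts : start times already promised to HPJs
            """
            nodes_needed, runtime = job
            start, end, nodes_free = window
            finish = start + runtime
            fits = nodes_needed <= nodes_free and finish <= end
            # the LPJ must not still be running when an HPJ is due to start
            safe = not any(start <= t < finish for t in reserved_starts)
            return fits and safe

        # Example: 4-node, 2-hour job in a [0 h, 6 h) window with 8 free nodes
        print(can_backfill((4, 2.0), (0.0, 6.0, 8), reserved_starts=[3.0]))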

    3. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    4. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging

      SciTech Connect (OSTI)

      Nillius, Peter; Klamra, Wlodek; Danielsson, Mats; Sibczynski, Pawel; Sharma, Diksha; Badano, Aldo

      2015-02-15

      Purpose: The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. Methods: The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridMANTIS, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that simultaneously fits the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. Results: The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV⁻¹ while measurements using the fast PMT instrument setup and sealed sources reported a light output of 28 keV⁻¹. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the

    5. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

    6. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping

      SciTech Connect (OSTI)

      Henke, Paul S.; Mak, Chi H.

      2014-08-14

      The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg²⁺ that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
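
      The abstract does not spell out the search algorithm, so the sketch below assumes a Dijkstra-style best-first traversal of the discretely connected landscape, with non-negative move costs taken as energy barriers; all names and structures are illustrative:

        import heapq

        def min_cost_path(graph, energy, start, goal):
            """Dijkstra-style search over a discretely connected
            charge-state landscape. graph[s] lists neighbor states; the
            cost of a move is the (non-negative) uphill barrier
            max(energy[t] - energy[s], 0). Purely illustrative; the
            paper's actual traversal is not reproduced here.
            """
            dist = {start: 0.0}
            heap = [(0.0, start)]
            visited = set()
            while heap:
                d, s = heapq.heappop(heap)
                if s == goal:
                    return d
                if s in visited:
                    continue
                visited.add(s)
                for t in graph[s]:
                    nd = d + max(energy[t] - energy[s], 0.0)
                    if nd < dist.get(t, float("inf")):
                        dist[t] = nd
                        heapq.heappush(heap, (nd, t))
            return None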

    7. Radiological Safety Analysis Computer Program

      Energy Science and Technology Software Center (OSTI)

      2001-08-28

      RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, updated and enhanced radionuclide inventory and inclusion of the dose-conversion factors from FGR 11 and 12.
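
      Downwind dispersion in codes of this kind is commonly a Gaussian plume calculation. A textbook ground-level form with ground reflection, offered as a generic sketch rather than RSAC's exact implementation:

        import math

        def plume_concentration(Q, u, sigma_y, sigma_z, y, H):
            """Ground-level concentration (g/m^3) from a continuous
            elevated release, standard Gaussian plume with reflection.

            Q       : release rate (g/s)
            u       : wind speed (m/s)
            sigma_y : horizontal dispersion coefficient at distance x (m)
            sigma_z : vertical dispersion coefficient at distance x (m)
            y       : crosswind offset from the plume centerline (m)
            H       : effective release height (m)
            """
            return (Q / (math.pi * u * sigma_y * sigma_z)
                    * math.exp(-y**2 / (2 * sigma_y**2))
                    * math.exp(-H**2 / (2 * sigma_z**2)))

        # Example with invented values: 1 g/s release, 3 m/s wind
        print(plume_concentration(1.0, 3.0, 50.0, 25.0, 0.0, 10.0))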

    8. Computational Nuclear Structure | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Excellent scaling is achieved by the production Automatic Dynamic Load Balancing (ADLB) library on the BG/P. Computational Nuclear Structure. PI Names: David Dean, Hai Nam. PI Emails: deandj@ornl.gov, namha@ornl.gov. Institution: Oak Ridge National Laboratory. Allocation Program: INCITE. Allocation Hours at ALCF: 15 Million. Year: 2010. Research Domain: Physics. Researchers from Oak Ridge and Argonne national laboratories are using complementary techniques, including Green's Function Monte Carlo, the No

    9. INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION

      Office of Scientific and Technical Information (OSTI)

      interval technical basis document Chiaro, P.J. Jr. 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS;...

    10. MHK technologies include current energy conversion

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research projects often involve highly collaborative partnerships between Sandia, industry, and academia to respond quickly with impactful results. Reference Model Project Sandia ...

    11. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oak Ridge National Laboratory (ORNL) has formed the Oak Ridge Climate Change Science Institute (ORCCSI), which will develop and execute programs for multi-agency, multi-disciplinary climate change research partnerships at ORNL. Led by Director Jim Hack and Deputy Director Dave Bader, the Institute will integrate scientific projects in modeling, observations, and experimentation with ORNL's powerful computational and informatics capabilities

    12. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    13. Large-eddy simulation of the Rayleigh-Taylor instability on a massively parallel computer

      SciTech Connect (OSTI)

      Amala, P.A.K.

      1995-03-01

      A computational model for the solution of the three-dimensional Navier-Stokes equations is developed. This model includes a turbulence model: a modified Smagorinsky eddy viscosity with a stochastic backscatter extension. The resultant equations are solved using finite-difference techniques, specifically second-order explicit Lax-Wendroff schemes. This computational model is implemented on a massively parallel computer. Programming models on massively parallel computers are next studied, with the aim of determining the best programming model for the developed computational model. To this end, three different codes are tested on a current massively parallel computer: the CM-5 at Los Alamos. Each code uses a different programming model: one is a data parallel code; the other two are message passing codes. Timing studies are done to determine which method is the fastest. The data parallel approach turns out to be the fastest method on the CM-5 by at least an order of magnitude. The resultant code is then used to study a current problem of interest to the computational fluid dynamics community: the Rayleigh-Taylor instability. Because the Lax-Wendroff methods handle shocks and sharp interfaces poorly, the Rayleigh-Taylor linear analysis is modified to include a smoothed interface. The linear growth rate problem is then investigated. Finally, the problem of the randomly perturbed interface is examined. Stochastic backscatter breaks the symmetry of the stationary unstable interface and generates a mixing layer growing at the experimentally observed rate. 115 refs., 51 figs., 19 tabs.
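
      The deterministic part of the turbulence model, the Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|, can be sketched on a uniform periodic grid as follows (the stochastic backscatter extension is omitted, and the constant Cs is a typical textbook value, not the one from this work):

        import numpy as np

        def smagorinsky_viscosity(u, v, w, dx, Cs=0.17):
            """Classic Smagorinsky eddy viscosity nu_t = (Cs*dx)^2 * |S|
            with |S| = sqrt(2 S_ij S_ij), using central differences on a
            uniform grid with periodic boundaries.
            """
            def d(f, axis):
                return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * dx)

            # grads[i][j] = d u_i / d x_j
            grads = [[d(f, ax) for ax in range(3)] for f in (u, v, w)]
            S2 = 0.0
            for i in range(3):
                for j in range(3):
                    # strain-rate tensor S_ij = 0.5*(du_i/dx_j + du_j/dx_i)
                    Sij = 0.5 * (grads[i][j] + grads[j][i])
                    S2 = S2 + 2.0 * Sij * Sij
            return (Cs * dx) ** 2 * np.sqrt(S2)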

    14. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page: some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    15. A Research Roadmap for Computation-Based Human Reliability Analysis

      SciTech Connect (OSTI)

      Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

      2015-08-01

      The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

    16. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems, and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    17. Low-frequency computational electromagnetics for antenna analysis

      SciTech Connect (OSTI)

      Miller, E.K.; Burke, G.J.

      1991-01-01

      An overview of low-frequency, computational methods for modeling the electromagnetic characteristics of antennas is presented here. The article presents a brief analytical background, and summarizes the essential ingredients of the method of moments, for numerically solving low-frequency antenna problems. Some extensions to the basic models of perfectly conducting objects in free space are also summarized, followed by a consideration of some of the computational issues that affect model accuracy, efficiency and utility. A variety of representative computations are then presented to illustrate various modeling aspects and capabilities that are currently available. A fairly extensive bibliography is included to suggest further reference material to the reader. 90 refs., 27 figs.
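
      The method of moments ultimately reduces the antenna integral equation to a dense linear system Z I = V for the segment currents. A schematic point-matching sketch follows; the kernel and excitation callables are placeholders for the physics, not the thin-wire kernel of any particular code:

        import numpy as np

        def mom_solve(kernel, excitation, n_segments):
            """Point-matched method of moments: fill the dense impedance
            matrix Z from a user-supplied kernel(m, n), build the
            excitation vector V, and solve Z I = V for the currents.
            """
            Z = np.array([[kernel(m, n) for n in range(n_segments)]
                          for m in range(n_segments)], dtype=complex)
            V = np.array([excitation(m) for m in range(n_segments)],
                         dtype=complex)
            return np.linalg.solve(Z, V)   # current coefficient per segment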

    18. Computational trigonometry

      SciTech Connect (OSTI)

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
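
      For context, the standard definitions from the antieigenvalue literature (stated here from general knowledge, not taken from this abstract): the first antieigenvalue of a positive definite operator A is the cosine of its maximal turning angle, with a closed form for symmetric positive definite matrices,

        \mu_1(A) \;=\; \min_{x \neq 0}
          \frac{\langle Ax, x \rangle}{\lVert Ax \rVert\, \lVert x \rVert}
          \;=\; \cos\phi(A),
        \qquad
        \mu_1(A) \;=\; \frac{2\sqrt{\lambda_{\min}\lambda_{\max}}}
                            {\lambda_{\min} + \lambda_{\max}}.

      In this framework the classical Kantorovich bound for steepest descent on a symmetric positive definite system reads E(x_{k+1}) <= sin^2(phi(A)) E(x_k), with sin(phi(A)) = (lambda_max - lambda_min)/(lambda_max + lambda_min), which is the kind of trigonometric reading of iterative convergence the abstract refers to.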

    19. Magnetic resonance imaging and computational fluid dynamics (CFD) simulations of rabbit nasal airflows for the development of hybrid CFD/PBPK models

      SciTech Connect (OSTI)

      Corley, Richard A.; Minard, Kevin R.; Kabilan, Senthil; Einstein, Daniel R.; Kuprat, Andrew P.; Harkema, J.R.; Kimbell, Julia; Gargas, M.L.; Kinzell, John H.

      2009-06-01

      The percentages of total airflows over the nasal respiratory and olfactory epithelium of female rabbits were calculated from computational fluid dynamics (CFD) simulations of steady-state inhalation. These airflow calculations, along with nasal airway geometry determinations, are critical parameters for hybrid CFD/physiologically based pharmacokinetic models that describe the nasal dosimetry of water-soluble or reactive gases and vapors in rabbits. CFD simulations were based upon three-dimensional computational meshes derived from magnetic resonance images of three adult female New Zealand White (NZW) rabbits. In the anterior portion of the nose, the maxillary turbinates of rabbits are considerably more complex than comparable regions in rats, mice, monkeys, or humans. This leads to a greater surface area to volume ratio in this region and thus the potential for increased extraction of water-soluble or reactive gases and vapors in the anterior portion of the nose compared to many other species. Although there was considerable interanimal variability in the fine structures of the nasal turbinates and airflows in the anterior portions of the nose, there was remarkable consistency between rabbits in the percentage of total inspired airflows that reached the ethmoid turbinate region (~50%) that is presumably lined with olfactory epithelium. These latter results (airflows reaching the ethmoid turbinate region) were higher than previously published estimates for the male F344 rat (19%) and human (7%). These differences in regional airflows can have significant implications in interspecies extrapolations of nasal dosimetry.

    20. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...