Multiscale Subsurface Biogeochemical Modeling
Simulation of flow inside an experimental packed bed, performed on Franklin Key...
Peridynamic Multiscale Finite Element Methods
Costa, Timothy; Bond, Stephen D.; Littlewood, David John; Moore, Stan Gerald
2015-12-01
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems that communicate across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid across a wide range of scales, limiting to the classical partial differential equation models valid at the design scale and extending down to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method.
Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models, leveraging the speed and maturity of local models together with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate for local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by requiring only that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next-generation computing platforms. Finally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions on the consistency of those models.
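The element-support basis construction can be illustrated with a minimal one-dimensional sketch. This is a hedged illustration only: it assumes a simple scalar diffusion problem -(a u')' = 0 in place of the peridynamic operator used in the report, and the function name and inputs are invented for the example.

```python
import numpy as np

def msfem_basis(a_fine, n_coarse):
    """Build multiscale basis functions on each coarse element by solving
    the fine-scale problem -(a u')' = 0 with boundary values 0 and 1.

    a_fine   : fine-cell coefficients (length divisible by n_coarse)
    n_coarse : number of coarse elements
    Returns one (phi_left, phi_right) pair of nodal arrays per element.
    """
    m = len(a_fine) // n_coarse  # fine cells per coarse element
    bases = []
    for k in range(n_coarse):
        a = a_fine[k * m:(k + 1) * m]
        # In 1D the flux a*u' is constant when -(a u')' = 0, so the basis
        # rising from 0 to 1 is the normalized cumulative sum of 1/a.
        s = np.concatenate(([0.0], np.cumsum(1.0 / a)))
        phi_right = s / s[-1]        # 0 at the left node, 1 at the right
        bases.append((1.0 - phi_right, phi_right))
    return bases
```

For a constant coefficient the basis reduces to the usual linear hat functions; a heterogeneous coefficient bends the basis, which is how coarse-scale solves carry microstructural information.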
Multiscale Thermohydrologic Model
T. Buscheck
2004-10-12
The purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. Thus, the goal is to predict the range of possible thermal-hydrologic conditions across the repository; this is quite different from predicting a single expected thermal-hydrologic response. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes (Table 1-1). These thermal-hydrologic parameters are required to support ''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504]). The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock. The MSTHM uses data obtained from the data tracking numbers (DTNs) listed in Table 4.1-1. 
The majority of those DTNs were generated from the following analyses and model reports: (1) ''UZ Flow Model and Submodels'' (BSC 2004 [DIRS 169861]); (2) ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004); (3) ''Calibrated Properties Model'' (BSC 2004 [DIRS 169857]); (4) ''Thermal Conductivity of the Potential Repository Horizon'' (BSC 2004 [DIRS 169854]); (5) ''Thermal Conductivity of the Non-Repository Lithostratigraphic Layers'' (BSC 2004 [DIRS 170033]); (6) ''Ventilation Model and Analysis Report'' (BSC 2004 [DIRS 169862]); (7) ''Heat Capacity Analysis Report'' (BSC 2004 [DIRS 170003]).
Wagner, Gregory John; Collis, Samuel Scott; Templeton, Jeremy Alan; Lehoucq, Richard B.; Parks, Michael L.; Jones, Reese E.; Silling, Stewart Andrew; Scovazzi, Guglielmo; Bochev, Pavel B.
2007-10-01
This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.
MULTISCALE THERMOHYDROLOGIC MODEL
T. Buscheck
2005-07-07
The intended purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. The goal of the MSTHM is to predict a reasonable range of possible thermal-hydrologic conditions within the emplacement drift. To be reasonable, this range includes the influence of waste-package-to-waste-package heat output variability relevant to the license application design, as well as the influence of uncertainty and variability in the geologic and hydrologic conditions relevant to predicting the thermal-hydrologic response in emplacement drifts. This goal is quite different from the goal of a model to predict a single expected thermal-hydrologic response. As a result, the development and validation of the MSTHM and the associated analyses using this model are focused on the goal of predicting a reasonable range of thermal-hydrologic conditions resulting from parametric uncertainty and waste-package-to-waste-package heat-output variability. Thermal-hydrologic conditions within emplacement drifts depend primarily on thermal-hydrologic conditions in the host rock at the drift wall and on the temperature difference between the drift wall and the drip-shield and waste-package surfaces. Thus, the ability to predict a reasonable range of relevant in-drift MSTHM output parameters (e.g., temperature and relative humidity) is based on valid predictions of thermal-hydrologic processes in the host rock, as well as valid predictions of heat-transfer processes between the drift wall and the drip-shield and waste-package surfaces. 
Because the invert contains crushed gravel derived from the host rock, the invert is, in effect, an extension of the host rock, with thermal and hydrologic properties that have been modified by virtue of the crushing (and the resulting geometry of the gravel grains). Thus, given that reasonable invert properties are applied, the ability to predict a reasonable range of relevant MSTHM output parameters for the invert are based on valid predictions of thermal-hydrologic processes in the host rock. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes. The thermal-hydrologic parameters used to support ''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' are identified in Table 1-1. The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock.
X. Frank Xu
2010-03-30
Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work has been done on integrating the two, and no method was formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The theory of MSFEM decomposes a boundary value problem with random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practice, in which fine-scale microstructure is approximated by effective constitutive constants, and it can be solved using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly encountered in conventional approaches by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software for uncertainty quantification of multiscale systems. Applications of MSFEM to engineering problems will directly enhance modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquakes), and biological systems (biological tissues, bones, protein folding).
Continued development of MSFEM will further contribute to the establishment of a Multiscale Stochastic Modeling strategy, and thereby potentially bring paradigm-shifting changes to the simulation and modeling of complex systems across multidisciplinary fields.
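The slow/fast decomposition can be sketched in one dimension. This is a hedged illustration only: it assumes a two-phase random coefficient and uses the 1D harmonic-mean homogenization limit as the "effective constant"; the solver and variable names are invented for the example and are not part of the MSFEM software.

```python
import numpy as np

def solve_dirichlet(a, f):
    """Solve -(a u')' = f on (0, 1) with u(0) = u(1) = 0.
    a: one coefficient per fine interval; f: one value per interior node."""
    n = len(a)
    h = 1.0 / n
    main = (a[:-1] + a[1:]) / h**2        # diagonal of the stiffness matrix
    off = -a[1:-1] / h**2                 # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

rng = np.random.default_rng(0)
a_fine = rng.choice([1.0, 10.0], size=200)   # random two-phase microstructure
f = np.ones(len(a_fine) - 1)

# Slow-scale problem: microstructure replaced by its effective constant
# (the harmonic mean, which is the exact homogenization limit in 1D).
a_eff = np.full_like(a_fine, 1.0 / np.mean(1.0 / a_fine))
u_slow = solve_dirichlet(a_eff, f)

# Fast-scale part: the local fluctuation that the effective model misses.
u_fluct = solve_dirichlet(a_fine, f) - u_slow
```

The effective-constant solve stands in for the "slow scale deterministic problem"; the residual field stands in for the fluctuations the fast-scale stochastic solver is designed to quantify.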
Towards a Multiscale Approach to Cybersecurity Modeling
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of 'multiscale' graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm that solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
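As a rough, hypothetical illustration of the coarse-scale distance idea (not the authors' multiscale algorithm), a partitioned graph can be collapsed into supernodes and an ordinary single-scale shortest-path solver run on the coarse graph; taking minimum-weight coarse edges makes the coarse distance an optimistic lower bound on the fine-scale one.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Single-source shortest paths; adj maps node -> {neighbor: weight}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def coarsen(adj, part):
    """Collapse nodes into supernodes; part maps node -> block id.
    Cross-block edges keep the minimum observed weight, so coarse
    distances never exceed the corresponding fine distances."""
    coarse = defaultdict(dict)
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            bu, bv = part[u], part[v]
            if bu != bv:
                coarse[bu][bv] = min(coarse[bu].get(bv, float('inf')), w)
    return coarse

# A 4-node path graph split into two supernodes.
fine = {0: {1: 1.0}, 1: {0: 1.0, 2: 5.0}, 2: {1: 5.0, 3: 1.0}, 3: {2: 1.0}}
part = {0: 0, 1: 0, 2: 1, 3: 1}
coarse = coarsen(fine, part)  # supernode 0 <-> supernode 1 with weight 5
```

In a defense setting the cheap coarse distance could screen supernode pairs before any fine-scale (and more expensive) path query is issued.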
A multiscale two-point flux-approximation method
Møyner, Olav; Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common to all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
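For readers unfamiliar with the fine-scale building block, the standard two-point flux approximation computes a face transmissibility by combining half-transmissibilities harmonically. The sketch below shows that elementary step only, not the upscaled MsTPFA construction, and its names are invented for the example.

```python
def tpfa_transmissibility(k1, k2, d1, d2, area=1.0):
    """Face transmissibility between two cells with permeabilities k1, k2
    whose centers lie at distances d1, d2 from the shared face."""
    t1 = k1 * area / d1            # half-transmissibility, cell 1
    t2 = k2 * area / d2            # half-transmissibility, cell 2
    return 1.0 / (1.0 / t1 + 1.0 / t2)

def face_flux(trans, p1, p2):
    """Two-point flux across the face for a pressure drop p1 - p2."""
    return trans * (p1 - p2)
```

The harmonic combination means a single near-impermeable cell throttles the face flux, which is why transmissibility (rather than permeability) is the natural quantity to upscale.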
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which requires explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Glotzer, Sharon C.
2013-08-28
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton and Oakridge National Laboratory we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
Multiscale Concrete Modeling of Aging Degradation
Hammi, Youssef; Gullett, Philipp; Horstemeyer, Mark F.
2015-07-31
In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM), based on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contributions from the different degradation mechanisms is theoretically developed. The total damage is forwarded to a Multi-Stage Fatigue (MSF) model to enable assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].
Moist multi-scale models for the hurricane embryo
Majda, Andrew J. [New York University]; Xing, Yulong [ORNL]; Mohammadian, Majid [University of Ottawa, Canada]
2010-01-01
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
A multilevel multiscale mimetic method for an anisotropic infiltration problem
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2009-01-01
Modeling of multiphase flow and transport in highly heterogeneous porous media must capture a broad range of coupled spatial and temporal scales. Recently, a hierarchical approach dubbed the Multilevel Multiscale Mimetic (M³) method was developed to simulate two-phase flow in porous media. The M³ method is locally mass conserving at all levels in its hierarchy, it supports unstructured polygonal grids and full tensor permeabilities, and it can achieve large coarsening factors. In this work we consider infiltration of water into a two-dimensional layered medium. The grid is aligned with the layers but not the coordinate axes. We demonstrate that, with an efficient temporal updating strategy for the coarsening parameters, fine-scale accuracy of prominent features in the flow is maintained by the M³ method.
Gao, Kai; Fu, Shubin; Gibson, Richard L.; Chung, Eric T.; Efendiev, Yalchin
2015-04-14
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference and finite-element methods, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios
2015-06-09
This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.
Fast multiscale Gaussian beam methods for wave equations in bounded convex domains
Bao, Gang [Department of Mathematics, Michigan State University, East Lansing, MI 48824]; Lai, Jun; Qian, Jianliang
2014-03-15
Motivated by fast multiscale Gaussian wavepacket transforms and multiscale Gaussian beam methods which were originally designed for pure initial-value problems of wave equations, we develop fast multiscale Gaussian beam methods for initial boundary value problems of wave equations in bounded convex domains in the high frequency regime. To compute the wave propagation in bounded convex domains, we have to take into account reflecting multiscale Gaussian beams, which is accomplished by enforcing reflecting boundary conditions during beam propagation and carrying out a suitable reflecting beam summation. To propagate multiscale beams efficiently, we prove that the ratio of the squared magnitude of beam amplitude to the beam width is roughly conserved, and accordingly we propose an effective indicator to identify significant beams. We also prove that the resulting multiscale Gaussian beam methods converge asymptotically. Numerical examples demonstrate the accuracy and efficiency of the method.
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Du, Qiang
2014-11-12
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. 
One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
Multi-scale Modeling of Plasticity in Tantalum.
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay; Buchheit, Thomas E.; Boyce, Brad; Weinberger, Christopher
2015-12-01
In this report, we present a multi-scale computational model to simulate the plastic deformation of tantalum, along with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications.
Furthermore, direct and quantitative comparisons between experimental measurements and simulation show that the proposed model accurately captures plasticity in deformation of polycrystalline tantalum.
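A temperature- and strain-rate-dependent constitutive law of the kind described above can be sketched with a generic thermally activated (kink-pair type) flow-stress expression. The functional form is the standard Kocks-type ansatz, and every numerical constant below is an illustrative placeholder, not a calibrated tantalum value from this report.

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def kink_pair_flow_stress(temp, strain_rate, sigma_ath=50e6, sigma_0=1.0e9,
                          dh0=1.0, rate_0=1e7, p=0.5, q=1.5):
    """Flow stress (Pa) from a generic kink-pair activation law:
    sigma = sigma_ath + sigma_0*(1 - ((kT/dH0)*ln(rate_0/rate))**(1/q))**(1/p)
    All default parameter values are illustrative placeholders."""
    x = (KB_EV * temp / dh0) * math.log(rate_0 / strain_rate)
    x = min(max(x, 0.0), 1.0)  # above the athermal limit only sigma_ath remains
    return sigma_ath + sigma_0 * (1.0 - x ** (1.0 / q)) ** (1.0 / p)
```

The form reproduces the qualitative BCC behavior such a calibration targets: flow stress rises sharply at low temperature and at high strain rate, and saturates at the athermal level at high temperature.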
Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.
2014-06-01
This presentation discusses the significant enhancement of computational efficiency in nonlinear multiscale battery model for computer aided engineering in current research at NREL.
Multi-Scale Multi-Dimensional Model for Better Cell Design and Management (Presentation)
Kim, G.-H.; Smith, K.
2008-09-01
Describes NREL's R&D to develop a multi-scale model to assist in designing better, more reliable lithium-ion battery cells for advanced vehicles.
Multi-Scale Multi-Dimensional Ion Battery Performance Model
Energy Science and Technology Software Center (OSTI)
2007-05-07
The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different-scale physics, achieving much higher computational efficiency compared to the single domain approach. It solves a one dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transport in a macro grid system.
Multiscale modeling for fluid transport in nanosystems.
Lee, Jonathan W.; Jones, Reese E.; Mandadapu, Kranthi Kiran; Templeton, Jeremy Alan; Zimmerman, Jonathan A.
2013-09-01
Atomistic-scale behavior drives performance in many micro- and nano-fluidic systems, such as microfluidic mixers and electrical energy storage devices. Bringing this information into the traditional continuum models used for engineering analysis has proved challenging. This work describes one approach to address this issue: developing atomistic-to-continuum multiscale and multiphysics methods that enable molecular dynamics (MD) representations of atoms to be incorporated into continuum simulations. Coupling is achieved by imposing constraints based on fluxes of conserved quantities between the two regions described by each of these models. The impact of electric fields and surface charges is also critical; hence, methodologies to extend finite-element (FE) MD electric field solvers have been derived to account for these effects. Finally, the continuum description can have inconsistencies with the coarse-grained MD dynamics, so FE equations based on MD statistics were derived to facilitate the multiscale coupling. Examples are shown relevant to nanofluidic systems, such as pore flow, Couette flow, and the electric double layer.
Fluid simulations with atomistic resolution: a hybrid multiscale method with field-wise coupling
Borg, Matthew K. [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)]; Lockerby, Duncan A., E-mail: duncan.lockerby@warwick.ac.uk [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom)]; Reese, Jason M., E-mail: jason.reese@strath.ac.uk [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)]
2013-12-15
We present a new hybrid method for simulating dense fluid systems that exhibit multiscale behaviour, in particular, systems in which a Navier–Stokes model may not be valid in parts of the computational domain. We apply molecular dynamics as a local microscopic refinement for correcting the Navier–Stokes constitutive approximation in the bulk of the domain, as well as providing a direct measurement of velocity slip at bounding surfaces. Our hybrid approach differs from existing techniques, such as the heterogeneous multiscale method (HMM), in some fundamental respects. In our method, the individual molecular solvers, which provide information to the macro model, are not coupled with the continuum grid at nodes (i.e. point-wise coupling); instead, coupling occurs over distributed heterogeneous fields (here referred to as field-wise coupling). This affords two major advantages. Whereas point-wise coupled HMM is limited to regions of flow that are highly scale-separated in all spatial directions (i.e. where the state of non-equilibrium in the fluid can be adequately described by a single strain tensor and temperature gradient vector), our field-wise coupled HMM has no such limitations and so can be applied to flows with arbitrarily varying degrees of scale separation (e.g. flow from a large reservoir into a nano-channel). The second major advantage is that the position of molecular elements does not need to be collocated with nodes of the continuum grid, which means that the resolution of the microscopic correction can be adjusted independently of the resolution of the continuum model. This in turn means the computational cost and accuracy of the molecular correction can be independently controlled and optimised. The macroscopic constraints on the individual molecular solvers are artificial body-force distributions, used in conjunction with standard periodicity.
We test our hybrid method on the Poiseuille flow problem for both Newtonian (Lennard-Jones) and non-Newtonian (FENE) fluids. The multiscale results are validated with expensive full-scale molecular dynamics simulations of the same case. Very close agreement is obtained for all cases, with as few as two micro elements required to accurately capture both the Newtonian and non-Newtonian flow fields. Our multiscale method converges very quickly (within 3–4 iterations) and is an order of magnitude more computationally efficient than the full-scale simulation.
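The macro–micro iteration in such hybrid schemes can be sketched schematically (hypothetical closed-form stand-ins, not the field-wise HMM itself): the macro model predicts a strain rate from its current constitutive guess, a micro "measurement" returns the true stress, and the closure is corrected until the two descriptions agree.

```python
def micro_measurement(strain_rate):
    """Stand-in for an expensive molecular-dynamics micro element: returns
    shear stress for a (hypothetical) shear-thinning fluid that a fixed
    Newtonian continuum closure cannot capture."""
    mu_true = 1.0 / (1.0 + 0.1 * abs(strain_rate))
    return mu_true * strain_rate

def hybrid_solve(pressure_gradient, tol=1e-8, max_iter=50):
    """Iterate macro prediction <-> micro correction to a consistent state."""
    mu = 1.0  # initial Newtonian guess for the effective viscosity
    for iteration in range(1, max_iter + 1):
        strain_rate = pressure_gradient / mu        # macro-model prediction
        stress = micro_measurement(strain_rate)     # micro "measurement"
        mu_new = stress / strain_rate               # corrected closure
        if abs(mu_new - mu) < tol:
            return mu_new, iteration
        mu = mu_new
    return mu, max_iter

mu_eff, iters = hybrid_solve(pressure_gradient=1.0)
```

For this toy model the fixed point can be solved by hand (mu = 0.9 at unit pressure gradient), so the iteration's rapid convergence is easy to verify.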
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
Office of Scientific and Technical Information (OSTI)
The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the ...
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin
2015-06-05
The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
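For the finely layered benchmark mentioned above, the analytic effective-medium result is simple enough to state: for loading normal to the layering, the effective modulus is the thickness-weighted harmonic (Reuss) average of the layer moduli. A minimal sketch of that textbook reference solution (shown only as the benchmark such homogenization schemes are compared against, not the paper's multiscale basis-function method):

```python
def effective_modulus_normal(fractions, moduli):
    """Thickness-weighted harmonic (Reuss) average: the effective modulus
    of a finely layered stack loaded normal to the layers."""
    assert abs(sum(fractions) - 1.0) < 1e-12, "fractions must sum to 1"
    return 1.0 / sum(f / m for f, m in zip(fractions, moduli))

# two equally thick layers with moduli 2 and 6 (arbitrary units)
m_eff = effective_modulus_normal([0.5, 0.5], [2.0, 6.0])
```

Note the harmonic average (3.0 here) is always below the arithmetic one (4.0), reflecting that the compliant layer dominates the response normal to the layering.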
Integrated Multiscale Modeling of Molecular Computing Devices
Weinan E
2012-03-29
The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, the time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
Multiscale Design of Advanced Materials based on Hybrid Ab Initio and Quasicontinuum Methods
Luskin, Mitchell
2014-03-12
This project united researchers from mathematics, chemistry, computer science, and engineering for the development of new multiscale methods for the design of materials. Our approach was highly interdisciplinary, but it had two unifying themes: first, we utilized modern mathematical ideas about change-of-scale and state-of-the-art numerical analysis to develop computational methods and codes to solve real multiscale problems of DOE interest; and, second, we took very seriously the need for quantum mechanics-based atomistic forces, and based our methods on fast solvers of chemically accurate methods.
A Mathematical Analysis of Atomistic-to-Continuum (AtC) Multiscale Coupling Methods
Gunzburger, Max
2013-11-13
We have worked on several projects aimed at improving the efficiency and understanding of multiscale methods, especially those applicable to problems involving atomistic-to-continuum coupling. Activities include blending methods for AtC coupling and efficient quasi-continuum methods for problems with long-range interactions.
A MULTISCALE, CELL-BASED FRAMEWORK FOR MODELING CANCER DEVELOPMENT
JIANG, YI
2007-01-16
Cancer remains one of the leading causes of death from disease. We use a systems approach that combines mathematical modeling, numerical simulation, and in vivo and in vitro experiments to develop a predictive model that medical researchers can use to study and treat cancerous tumors. The multiscale, cell-based model includes intracellular regulation, cellular-level dynamics and intercellular interactions, and extracellular-level chemical dynamics. The intracellular protein regulation and signaling pathways are described by Boolean networks. The cellular-level growth and division dynamics, cellular adhesion, and interaction with the extracellular matrix are described by a lattice Monte Carlo model (the Cellular Potts Model). The extracellular dynamics of the signaling molecules and metabolites are described by a system of reaction-diffusion equations. All three levels of the model are integrated through a hybrid parallel scheme into a high-performance simulation tool. The simulation results reproduce experimental data on both avascular tumors and tumor angiogenesis. By combining the model with experimental data to construct biologically accurate simulations of tumors and their vascular systems, this model will enable medical researchers to gain a deeper understanding of the cellular and molecular interactions associated with cancer progression and treatment.
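The extracellular scale in such models is a reaction-diffusion system. A minimal sketch of one explicit Euler time step for a 1-D nutrient field with uptake and no-flux boundaries (illustrative parameters, not the paper's equations):

```python
def reaction_diffusion_step(u, D, k, dx, dt):
    """One explicit Euler step of du/dt = D * u_xx - k * u on a 1-D grid
    with no-flux (mirror) boundaries: diffusing nutrient, uptake rate k."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]       # mirror at boundary
        right = u[i + 1] if i < n - 1 else u[i]
        laplacian = (left - 2.0 * u[i] + right) / dx ** 2
        new[i] = u[i] + dt * (D * laplacian - k * u[i])
    return new

# a uniform field has zero Laplacian, so only the uptake term acts
u = reaction_diffusion_step([1.0] * 10, D=0.1, k=0.5, dx=1.0, dt=0.1)
```

Explicit stepping like this is only stable for dt small relative to dx**2 / (2 * D); production multiscale codes typically use implicit or operator-split solvers at this level.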
An improved multiscale model for dilute turbulent gas particle flows based on the equilibration of energy concept (Thesis/Dissertation)
Office of Scientific and Technical Information (OSTI)
Many particle-laden flows in engineering applications involve turbulent gas flows. Modeling multiphase turbulent flows is an
SU-F-18C-15: Model-Based Multiscale Noise Reduction On Low Dose Cone Beam Projection
Yao, W; Farr, J
2014-06-15
Purpose: To improve the image quality of low dose cone beam CT for patient positioning in radiation therapy. Methods: In low dose cone beam CT (CBCT) imaging systems, a Poisson process governs the randomness of photon fluence at the x-ray source and at the detector, because photon absorption in the medium is an independent binomial process. On a CBCT projection, the variance of the fluence consists of the variance of the noiseless imaged structure and that of the Poisson noise, which is proportional to the mean (noiseless) fluence at the detector. This calls for multiscale filters that smooth noise while keeping the structural information of the imaged object. We used a mathematical model of the Poisson process to design multiscale filters and to establish the balance between noise correction and structure blurring. The algorithm was checked with low dose kilovoltage CBCT projections acquired from a Varian OBI system. Results: Investigation of low dose CBCT of a Catphan phantom and patients showed that our model-based multiscale technique could efficiently reduce noise while keeping the fine structure of the imaged object. After the image processing, the number of visible line pairs in the Catphan phantom scanned with a 4 ms pulse time was similar to that scanned with 32 ms, and soft tissue structure in simulated 4 ms patient head-and-neck images was also comparable with that in images scanned at 20 ms. Compared with the fixed-scale technique, the image quality from the multiscale one was improved. Conclusion: Projection-specific multiscale filters achieve a better balance between noise reduction and loss of structural information. The image quality of low dose CBCT can be improved by using multiscale filters.
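The variance-proportional-to-mean property that motivates these scale-adaptive filters can be made concrete with the standard Anscombe transform (a classical variance stabilizer for Poisson data, shown here for illustration; it is not the authors' model-based filter design):

```python
def anscombe(x):
    """Anscombe transform: maps Poisson counts toward unit noise variance."""
    return 2.0 * (x + 3.0 / 8.0) ** 0.5

def stabilized_variance(lam):
    """Delta-method estimate of Var[anscombe(X)] for X ~ Poisson(lam):
    (f'(lam))**2 * Var[X] = lam / (lam + 3/8), approaching 1 as lam grows."""
    derivative = 1.0 / (lam + 3.0 / 8.0) ** 0.5
    return derivative ** 2 * lam

low_dose = stabilized_variance(10.0)     # few photons per detector pixel
high_dose = stabilized_variance(1000.0)  # many photons per detector pixel
```

After such a stabilization the noise level no longer depends on the local fluence, which is exactly the regime where a single-scale filter suffices; without it, the filter scale must adapt to the local mean, as in the projection-specific filters above.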
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP (Technical Report)
Office of Scientific and Technical Information (OSTI)
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is
Evaluation of the Multi-Scale Modeling Framework using Data from the Atmospheric Radiation Measurement Program (Conference)
Office of Scientific and Technical Information (OSTI)
One of the goals of the Atmospheric Radiation Measurement (ARM) program was to provide long-term observations for evaluation of cloud and radiation treatment
Evaluation of the Multi-scale Modeling Framework Using Data from the Atmospheric Radiation Measurement Program (Journal Article)
Office of Scientific and Technical Information (OSTI)
One of the goals of the Atmospheric Radiation Measurement (ARM) program is to provide long-term observations for evaluating and improving cloud and
Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report
Office of Scientific and Technical Information (OSTI)
Ely, G.P.
2013-10-31
Multiscale Modeling of the Orthotropic Behaviour of PA6-6 overmoulded Composites using MMI Approach
Bikard, Jerome; Robert, Gilles; Moulinjeune, Olivier [RHODIA ENGINEERING PLASTICS, Technyl Application Center Avenue Ramboz, BP 64, 69192 Saint FONS CEDEX (France)
2011-05-04
In this study, the MMI ConfidentDesign multiscale approach (a non-linear multiscale simulation based on DIGIMAT®, combining injection modeling of the filled polymer with a multiscale mechanical model that uses the fiber orientation tensor resulting from the injection) has been combined with an orthotropic damageable elastic simulation. The anisotropic properties (including a rupture criterion) are estimated, and a multiscale simulation including the heterogeneous material properties resulting from the injection process is performed. The impact of the fiber ratio is then investigated. The structural simulation predicts stresses localized close to the punch, both in the injected PA66 and in the composite part. The greater the fiber volume ratio, the greater the modulus and the more brittle the composite.
Components for Atomistic-to-Continuum Multiscale Modeling of Flow in Micro- and Nanofluidic Systems
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Adalsteinsson, Helgi; Debusschere, Bert J.; Long, Kevin R.; Najm, Habib N.
2008-01-01
Micro- and nanofluidics pose a series of significant challenges for science-based modeling. Key among those are the wide separation of length- and timescales between interface phenomena and bulk flow and the spatially heterogeneous solution properties near solid-liquid interfaces. It is not uncommon for characteristic scales in these systems to span nine orders of magnitude from the atomic motions in particle dynamics up to evolution of mass transport at the macroscale level, making explicit particle models intractable for all but the simplest systems. Recently, atomistic-to-continuum (A2C) multiscale simulations have gained a lot of interest as an approach to rigorously handle particle-level dynamics while also tracking evolution of large-scale macroscale behavior. While these methods are clearly not applicable to all classes of simulations, they are finding traction in systems in which tight-binding, and physically important, dynamics at system interfaces have complex effects on the slower-evolving large-scale evolution of the surrounding medium. These conditions allow decomposition of the simulation into discrete domains, either spatially or temporally. In this paper, we describe how features of domain decomposed simulation systems can be harnessed to yield flexible and efficient software for multiscale simulations of electric field-driven micro- and nanofluidics.
Bayesian data assimilation for stochastic multiscale models of transport in porous media.
Marzouk, Youssef M.; van Bloemen Waanders, Bart Gustaaf; Parno, Matthew; Ray, Jaideep; Lefantzi, Sophia; Salazar, Luke; McKenna, Sean Andrew; Klise, Katherine A.
2011-10-01
We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways: (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level, and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loève expansion of a multi-Gaussian field. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long, but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas.
Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman filter. We conclude with a demonstration of the use of multiscale stochastic finite elements to reconstruct permeability fields. This method, though computationally intensive, is general and can be used for multiscale inference in cases where a subgrid model cannot be constructed.
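The dimensionality reduction idea above can be illustrated with a toy truncated spectral (Karhunen-Loève-type) expansion: a handful of random coefficients with decaying mode variances parameterize a smooth random field, replacing a grid's worth of correlated unknowns. The cosine modes and Gaussian variance decay here are invented for illustration, not the paper's basis:

```python
import math
import random

def kl_sample(n_grid, n_modes, corr_len, seed=0):
    """Sample a smooth random field on [0, 1] from a truncated
    spectral (KL-type) expansion: n_modes random coefficients
    parameterize the field instead of n_grid correlated values."""
    rng = random.Random(seed)
    xi = [rng.gauss(0.0, 1.0) for _ in range(n_modes)]   # the inferred parameters
    field = []
    for i in range(n_grid):
        x = i / (n_grid - 1)
        val = 0.0
        for k in range(n_modes):
            sigma_k = math.exp(-0.5 * (k * corr_len) ** 2)  # decaying mode variance
            val += sigma_k * xi[k] * math.cos(math.pi * k * x)
        field.append(val)
    return field

field = kl_sample(n_grid=50, n_modes=8, corr_len=0.5)
```

In the Bayesian inverse problem, it is the 8 coefficients xi (not the 50 grid values) that the MCMC or ensemble Kalman sampler explores.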
A multiscale method for the analysis of defect behavior in Mo during electron irradiation
Rest, J.; Insepov, Z.; Ye, B.; Yun, D.
2014-10-01
In order to overcome a lack of experimental information on values for key materials properties and kinetic coefficients, a multiscale modeling approach is applied to defect behavior in irradiated Mo. Key materials properties, such as point defect (vacancy and interstitial) migration enthalpies, as well as kinetic factors such as dimer formation, defect recombination, and self-interstitial–interstitial loop interaction coefficients, are obtained from molecular dynamics calculations and implemented in rate-theory simulations of defect behavior. The multiscale methodology is validated against interstitial loop growth data obtained from electron irradiation of pure Mo. It is shown that the observed linear behavior of the loop diameter vs. the square root of irradiation time is a direct consequence of the 1D migration of self-interstitial atoms.
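The reported linear diameter-vs-sqrt(time) trend follows from a simple geometric argument: if a loop absorbs interstitials at a constant net rate, its area grows linearly in time, so its diameter scales as the square root of time. A sketch with illustrative (not measured) numbers:

```python
import math

def loop_diameter(t, net_absorption_rate, atoms_per_area):
    """If an interstitial loop absorbs defects at a constant net rate,
    its area grows linearly in time, so diameter ~ sqrt(t)."""
    area = net_absorption_rate * t / atoms_per_area  # atoms absorbed -> loop area
    return 2.0 * math.sqrt(area / math.pi)

# quadrupling the irradiation time doubles the diameter
d_1 = loop_diameter(1.0, net_absorption_rate=1.0e3, atoms_per_area=1.0e19)
d_4 = loop_diameter(4.0, net_absorption_rate=1.0e3, atoms_per_area=1.0e19)
```

The physics in the rate-theory model enters through the net absorption rate, which the MD-derived migration enthalpies and interaction coefficients determine.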
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Gunzburger, Max
2015-02-17
We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.
Collaborating for Multi-Scale Chemical Science
William H. Green
2006-07-14
Advanced model reduction methods were developed and integrated into the CMCS multiscale chemical science simulation software. The new technologies were used to simulate HCCI engines and burner flames with exceptional fidelity.
A second gradient theoretical framework for hierarchical multiscale modeling of materials
Luscher, Darby J; Bronkhorst, Curt A; Mc Dowell, David L
2009-01-01
A theoretical framework for the hierarchical multiscale modeling of inelastic response of heterogeneous materials has been presented. Within this multiscale framework, the second gradient is used as a nonlocal kinematic link between the response of a material point at the coarse scale and the response of a neighborhood of material points at the fine scale. Kinematic consistency between these scales results in specific requirements for constraints on the fluctuation field. The wryness tensor serves as a second-order measure of strain. The nature of the second-order strain induces anti-symmetry in the first-order stress at the coarse scale. The multiscale ISV constitutive theory is couched in the coarse scale intermediate configuration, from which an important new concept in scale transitions emerges, namely scale invariance of dissipation. Finally, a strategy for developing meaningful kinematic ISVs and the proper free energy functions and evolution kinetics is presented.
A voxel-based multiscale model to simulate the radiation response of hypoxic tumors
Espinoza, I.; Peschke, P.; Karger, C. P.
2015-01-15
Purpose: In radiotherapy, it is important to predict the response of tumors to irradiation prior to the treatment. This is especially important for hypoxic tumors, which are known to be highly radioresistant. Mathematical modeling based on the dose distribution, biological parameters, and medical images may help to improve this prediction and to optimize the treatment plan. Methods: A voxel-based multiscale tumor response model for simulating the radiation response of hypoxic tumors was developed. It considers viable and dead tumor cells, capillary and normal cells, as well as the most relevant biological processes such as (i) proliferation of tumor cells, (ii) hypoxia-induced angiogenesis, (iii) spatial exchange of cells leading to tumor growth, (iv) oxygen-dependent cell survival after irradiation, (v) resorption of dead cells, and (vi) spatial exchange of cells leading to tumor shrinkage. Oxygenation is described on a microscopic scale using a previously published tumor oxygenation model, which calculates the oxygen distribution for each voxel using the vascular fraction as the most important input parameter. To demonstrate the capabilities of the model, the dependence of the oxygen distribution on tumor growth and radiation-induced shrinkage is investigated. In addition, the impact of three different reoxygenation processes is compared, and tumor control probability (TCP) curves for a squamous cell carcinoma of the head and neck (HNSCC) are simulated under normoxic and hypoxic conditions. Results: The model describes the spatiotemporal behavior of the tumor on three different scales: (i) on the macroscopic scale, it describes tumor growth and shrinkage during radiation treatment, (ii) on a mesoscopic scale, it provides the cell density and vascular fraction for each voxel, and (iii) on the microscopic scale, the oxygen distribution may be obtained in terms of oxygen histograms. With increasing tumor size, the simulated tumors develop a hypoxic core.
Within the model, tumor shrinkage was found to be significantly more important for reoxygenation than angiogenesis or decreased oxygen consumption due to an increased fraction of dead cells. In the studied HNSCC case, the TCD50 values (dose at 50% TCP) decreased from 71.0 Gy under hypoxic conditions to 53.6 Gy under oxic conditions. Conclusions: The results obtained with the developed multiscale model are in accordance with expectations based on radiobiological principles and clinical experience. As the model is voxel-based, radiological imaging methods may help to provide the required 3D characterization of the tumor prior to irradiation. For clinical application, the model has to be further validated with experimental and clinical data. If this is achieved, the model may be used to optimize fractionation schedules and dose distributions for the treatment of hypoxic tumors.
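The qualitative effect of hypoxia on tumor control probability can be illustrated with the textbook Poisson TCP model with linear-quadratic cell survival, hypoxia entering as a dose-modifying factor. This is a deliberately crude stand-in for the voxel-based model above, with illustrative (not fitted) parameter values:

```python
import math

def tcp(dose_per_fraction, n_fractions, n_clonogens, alpha, beta, omf=1.0):
    """Poisson TCP with linear-quadratic survival. Hypoxia is folded in
    as an oxygen-modification factor omf < 1 reducing the effective dose
    (a textbook simplification, not the voxel model described above)."""
    d_eff = dose_per_fraction * omf
    survival_per_fraction = math.exp(-alpha * d_eff - beta * d_eff ** 2)
    expected_survivors = n_clonogens * survival_per_fraction ** n_fractions
    return math.exp(-expected_survivors)  # P(zero surviving clonogens)

# 35 x 2 Gy = 70 Gy; illustrative alpha/beta and clonogen number
oxic = tcp(2.0, 35, 1.0e7, alpha=0.3, beta=0.03)
hypoxic = tcp(2.0, 35, 1.0e7, alpha=0.3, beta=0.03, omf=0.5)
```

Even this crude model reproduces the direction of the effect reported above: at a fixed prescription, hypoxia collapses the control probability, which is why the hypoxic TCD50 is substantially higher than the oxic one.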
Frequency-domain multiscale quantum mechanics/electromagnetics simulation method
Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung E-mail: ghc@everest.hku.hk; Koo, SiuKong; Chen, GuanHua E-mail: ghc@everest.hku.hk; Chen, Quan; Wong, Ngai
2013-12-28
A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method could effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM and EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitate the information exchange between the QM and EM calculations and ensure that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon nanotube based molecular device.
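The self-consistent boundary-condition exchange described above can be sketched with two stand-in linear solvers that trade interface potential and current, with simple mixing, until both descriptions agree. The linear responses here are invented placeholders, not QM or EM physics:

```python
def em_solver(j_boundary):
    """Stand-in for the EM region: interface potential given interface
    current (hypothetical linear response)."""
    return 1.0 - 2.0 * j_boundary

def qm_solver(v_boundary):
    """Stand-in for the QM region: interface current given interface
    potential (hypothetical linear conductance)."""
    return 0.3 * v_boundary

def self_consistent(tol=1e-10, mix=0.5, max_iter=200):
    """Fixed-point iteration on the interface current with linear mixing."""
    j = 0.0
    for iteration in range(1, max_iter + 1):
        v = em_solver(j)          # EM solve with current boundary condition
        j_new = qm_solver(v)      # QM solve with potential boundary condition
        if abs(j_new - j) < tol:
            return v, j_new, iteration
        j = (1.0 - mix) * j + mix * j_new  # mixing damps the update
    return v, j, max_iter

v_int, j_int, iters = self_consistent()
```

At convergence the two boundary conditions are mutually consistent: the current the QM region produces from the EM potential equals the current the EM region was given.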
A Multiscale Modeling Approach to Analyze Filament-Wound Composite Pressure Vessels
Nguyen, Ba Nghiep; Simmons, Kevin L.
2013-07-22
A multiscale modeling approach to analyze filament-wound composite pressure vessels is developed in this article. The approach, which extends the Nguyen et al. model [J. Comp. Mater. 43 (2009) 217] developed for discontinuous fiber composites to continuous fiber ones, spans three modeling scales. The microscale considers the unidirectional elastic fibers embedded in an elastic-plastic matrix obeying the Ramberg-Osgood relation and J2 deformation theory of plasticity. The mesoscale behavior representing the composite lamina is obtained through an incremental Mori-Tanaka type model and the Eshelby equivalent inclusion method [Proc. Roy. Soc. Lond. A241 (1957) 376]. The implementation of the micro-meso constitutive relations in the ABAQUS finite element package (via user subroutines) allows the analysis of a filament-wound composite pressure vessel (macroscale) to be performed. Failure of the composite lamina is predicted by a criterion that accounts for the strengths of the fibers and of the matrix as well as of their interface. The developed approach is demonstrated in the analysis of a filament-wound pressure vessel to study the effect of the lamina thickness on the burst pressure. The predictions are favorably compared to the numerical and experimental results by Lifshitz and Dayan [Comp. Struct. 32 (1995) 313].
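The Ramberg-Osgood relation named for the matrix can be stated in its classical one-dimensional form (the J2 deformation-theory version used at the microscale reduces to this in uniaxial tension; the parameter values below are illustrative, not the paper's):

```python
def ramberg_osgood_strain(sigma, E, sigma_0, alpha, n):
    """Classical 1-D Ramberg-Osgood relation:
    total strain = elastic part sigma/E
                 + power-law plastic part alpha*(sigma_0/E)*(sigma/sigma_0)**n."""
    return sigma / E + alpha * (sigma_0 / E) * (sigma / sigma_0) ** n

# at sigma = sigma_0 the plastic part equals alpha times the elastic part
eps = ramberg_osgood_strain(sigma=400.0, E=200.0e3, sigma_0=400.0,
                            alpha=0.5, n=5)
```

The hardening exponent n controls how abruptly the response turns over from elastic to plastic; large n approaches elastic-perfectly-plastic behavior.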
Horstemeyer, Mark R.; Chaudhuri, Santanu
2015-09-30
A multiscale modeling Internal State Variable (ISV) constitutive model was developed that captures the fundamental structure-property relationships. The macroscale ISV model used lower length scale simulations (Butler-Volmer and Electronics Structures results) in order to inform the ISVs at the macroscale. The chemomechanical ISV model was calibrated and validated from experiments with magnesium (Mg) alloys that were investigated under corrosive environments coupled with experimental electrochemical studies. Because the ISV chemomechanical model is physically based, it can be used for other material systems to predict corrosion behavior. As such, others can use the chemomechanical model for analyzing corrosion effects on their designs.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sondak, D.; Shadid, J. N.; Oberai, A. A.; Pawlowski, R. P.; Cyr, E. C.; Smith, T. M.
2015-04-29
New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.
Carbon Capture Simulation Initiative: A Case Study in Multi-Scale Modeling and New Challenges
Miller, David C; Syamlal, Madhava; Zitney, Stephen E.
2014-06-07
Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative is a partnership among national laboratories, industry and universities that is developing and deploying a suite of multi-scale modeling and simulation tools including basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, high-resolution filtered computational-fluid-dynamic (CFD) submodels, validated high-fidelity device-scale CFD models with quantified uncertainty, and a risk analysis framework. These tools and models enable basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to synthesize and optimize a process. The resulting process informs the development of process control systems and more detailed simulations of potential equipment to improve the design and reduce scale-up risk. Quantification and propagation of uncertainty across scales is an essential part of these tools and models.
An improved multiscale model for dilute turbulent gas particle...
Office of Scientific and Technical Information (OSTI)
(1) model for interphase TKE transfer, especially the time scale of interphase TKE transfer, and (2) correct prediction of TKE evolution with variation of particle Stokes number. ...
Assessment of Multi-Scale T/H Codes and Models for DNB CP
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Assessment of Multi-Scale Thermal-Hydraulic Codes and Models for DNB Challenge Problem Applications (L3.AMA.CP.P8.01). Yixing Sung, Jin Yan, Liping Cao, Vefa N. Kucukboyaci, Emre Tatli (Westinghouse Electric Company LLC); Mark A. Christon, Jozsef Bakosi (Los Alamos National Laboratory); Robert K. Salko (Oak Ridge National Laboratory); Hongbin Zhang (Idaho National Laboratory). March 31, 2014. CASL-U-2014-0032-000
Wang, Yuan; Wang, Minghuai; Zhang, Renyi; Ghan, Steven J.; Lin, Yun; Hu, Jiaxi; Pan, Bowen; Levy, Misti; Jiang, Jonathan; Molina, Mario J.
2014-05-13
Atmospheric aerosols impact weather and global general circulation by modifying cloud and precipitation processes, but the magnitude of cloud adjustment by aerosols remains poorly quantified and represents the largest uncertainty in estimated forcing of climate change. Here we assess the impacts of anthropogenic aerosols on the Pacific storm track using a multi-scale global aerosol-climate model (GCM). Simulations of two aerosol scenarios corresponding to present-day and pre-industrial conditions reveal long-range transport of anthropogenic aerosols across the north Pacific and large resulting changes in aerosol optical depth, cloud droplet number concentration, and cloud and ice water paths. Shortwave and longwave cloud radiative forcing at the top of the atmosphere are changed by -2.5 and +1.3 W m^-2, respectively, by emission changes from pre-industrial to present day, and an increased cloud-top height indicates invigorated mid-latitude cyclones. The overall increased precipitation and poleward heat transport reflect intensification of the Pacific storm track by anthropogenic aerosols. Hence, this work provides for the first time a global perspective of the impacts of Asian pollution outflows from GCMs. Furthermore, our results suggest that the multi-scale modeling framework is essential for producing the aerosol invigoration effect of deep convective clouds on the global scale.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
T.A. Buscheck; Y. Sun; Y. Hao
2006-03-28
The MultiScale ThermoHydrologic Model (MSTHM) predicts thermal-hydrologic (TH) conditions within emplacement tunnels (drifts) and in the adjoining host rock at Yucca Mountain, Nevada, which is the proposed site for a radioactive waste repository in the US. Because these predictions are used in the performance assessment of the Yucca Mountain repository, they must address the influence of variability and uncertainty of the engineered- and natural-system parameters that significantly influence those predictions. Parameter-sensitivity studies show that the MSTHM predictions adequately propagate the influence of parametric variability and uncertainty. Model-validation studies show that the influence of conceptual-model uncertainty on the MSTHM predictions is insignificant compared to that of parametric uncertainty, which is propagated through the MSTHM.
Multiscale Multiphysics Lithium-Ion Battery Model with Multidomain Modular Framework
Kim, G. H.
2013-01-01
Lithium-ion batteries (LIBs), which power the recent wave of personal ubiquitous electronics, are also believed to be a key enabler of the electrification of vehicle powertrains on the path toward a sustainable transportation future. Over the past several years, the National Renewable Energy Laboratory (NREL) has developed the Multi-Scale Multi-Domain (MSMD) model framework, an expandable platform and a generic, modularized, flexible framework resolving interactions among multiple physics occurring at varied length and time scales in LIBs [1]. NREL has continued to enhance the functionality of the framework and to develop constituent models in the context of the MSMD framework, responding to the U.S. Department of Energy's CAEBAT program objectives. This talk will introduce recent advancements in NREL's LIB modeling research with regard to scale-bridging, multi-physics integration, and numerical scheme development.
Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling
McDowell, David; Deo, Chaitanya; Zhu, Ting; Wang, Yan
2015-10-21
Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high strength and high ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by better understanding the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production, radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena of irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
2012-05-01
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Anh Bui; Nam Dinh; Brian Williams
2013-09-01
In addition to a validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, and coolant chemistry, which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not based on conservation laws but are empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibrating a common model of subcooled flow boiling, an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process.
This work presents a step forward in the development and realization of the CIPS Validation Data Plan at the Consortium for Advanced Simulation of LWRs, enabling quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon in particular and of the CASL advanced predictive capabilities in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
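The calibration strategy described above, simultaneously fitting sub-models against multiple datasets of different types via Bayesian inference, can be illustrated with a deliberately tiny grid-based sketch. The two synthetic "datasets" and the model forms (2*theta and theta**2) are invented for illustration only and have no connection to the actual boiling sub-models:

```python
# Hedged sketch: one parameter theta is calibrated against two datasets of
# different types at once by summing their log-likelihoods in a single
# Bayesian update over a parameter grid (flat prior assumed).
thetas = [0.1 * i for i in range(1, 30)]  # candidate parameter grid

def gauss_loglik(pred, obs, sigma):
    # Gaussian measurement-error log-likelihood (up to a constant).
    return -0.5 * ((pred - obs) / sigma) ** 2

def log_posterior(theta):
    # Dataset A: a "large-scale" observable, modeled here as 2*theta.
    ll_a = gauss_loglik(2.0 * theta, 2.0, sigma=0.5)
    # Dataset B: a "small-scale" observable, modeled here as theta**2.
    ll_b = gauss_loglik(theta ** 2, 1.1, sigma=0.3)
    return ll_a + ll_b  # joint update: both datasets constrain theta together

best = max(thetas, key=log_posterior)
print(best)  # posterior mode balances both datasets
```

The point of the sketch is structural: neither dataset alone pins down the parameter as tightly as the joint posterior does, which mirrors the simultaneous multi-dataset calibration the abstract describes.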
Luo, Jian; Tomar, Vikas; Zhou, Naixie; Lee, Hongsuk
2013-06-30
Based on a recent discovery of premelting-like grain boundary segregation in refractory metals occurring at high temperatures and/or high alloying levels, this project investigated grain boundary segregation and embrittlement in tungsten (W) based alloys. Specifically, new interfacial thermodynamic models have been developed and quantified to predict high-temperature grain boundary segregation in the W-Ni binary alloy and W-Ni-Fe, W-Ni-Ti, W-Ni-Co, W-Ni-Cr, W-Ni-Zr and W-Ni-Nb ternary alloys. The thermodynamic modeling results have been experimentally validated for selected systems. Furthermore, multiscale modeling has been conducted at continuum, atomistic and quantum-mechanical levels to link grain boundary segregation with embrittlement. In summary, this 3-year project has successfully developed a theoretical framework in combination with a multiscale modeling strategy for predicting grain boundary segregation and embrittlement in W based alloys.
Discharge Performance of Li-O_{2} Batteries Using a Multiscale Modeling Approach
Bao, Jie; Xu, Wu; Bhattacharya, Priyanka; Stewart, Mark L.; Zhang, Jiguang; Pan, Wenxiao
2015-06-10
To study the discharge performance of Li–O_{2} batteries, we propose a multiscale modeling framework that links models in an upscaling fashion from the nanoscale to the mesoscale and finally to the device scale. We have effectively reconstructed the microstructure of a Li–O_{2} air electrode in silico, conserving the porosity, surface-to-volume ratio, and pore size distribution of the real air electrode structure. The mechanism of rate-dependent morphology of Li_{2}O_{2} growth is incorporated into the mesoscale model. The correlation between the active-surface-to-volume ratio and averaged Li_{2}O_{2} concentration is derived to link different scales. The proposed approach's accuracy is first demonstrated by comparing the predicted discharge curves of Li–O_{2} batteries with experimental results at high current density. Next, the validated modeling approach effectively captures the significant improvement in discharge capacity due to the formation of Li_{2}O_{2} particles. Finally, it predicts the discharge capacities of Li–O_{2} batteries with different air electrode microstructure designs and operating conditions.
Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework
Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; Somerville, Richard C.; Zhang, Kai; Liu, Xiaohong; Li, J-L F.
2014-11-06
In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper troposphere cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration, CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures following homogeneous nucleation threshold. The MMF models predict a higher frequency of midlatitude supersaturation in the Southern hemisphere and winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that compared to a conventional GCM, the MMF is a more powerful model to emulate parameters that evolve over short time scales such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation schemes and the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model can produce good agreement to the satellite retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specification of Dcs value.
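The role of the critical diameter Dcs can be illustrated with a toy partitioning of frozen particles into cloud ice and snow. The particle sizes and masses below are synthetic, and only the thresholding rule (and the observation that total frozen water is insensitive to the Dcs choice) reflects the abstract:

```python
# Hedged sketch: a critical diameter Dcs splits frozen hydrometeor mass into
# "cloud ice" (d < Dcs) and "snow" (d >= Dcs) for diagnostics such as ice
# water path. The particle population here is invented for illustration.
def partition_ice(diameters_um, masses_g, dcs_um=250.0):
    ice = sum(m for d, m in zip(diameters_um, masses_g) if d < dcs_um)
    snow = sum(m for d, m in zip(diameters_um, masses_g) if d >= dcs_um)
    return ice, snow

d = [50, 120, 240, 260, 400]     # particle diameters, micrometers (synthetic)
m = [0.1, 0.2, 0.3, 0.25, 0.15]  # particle masses, grams (synthetic)

ice, snow = partition_ice(d, m)
# Moving Dcs shifts mass between the two categories, but their sum, the total
# frozen water, stays fixed, consistent with the abstract's sensitivity result.
assert abs((ice + snow) - sum(m)) < 1e-9
```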
Toward Multi-scale Modeling and simulation of conduction in heterogeneous materials.
Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan; Cooper, Marcia A.; Erikson, William W.; Foiles, Stephen M.; Kay, Jeffrey J; Phinney, Leslie M.; Piekos, Edward S.; Specht, Paul Elliott; Wixom, Ryan R.; Yarrington, Cole
2015-01-01
This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success, although preliminary, in resolving for the first time thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials; this is the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach, in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6).
In both cases much more remains to be accomplished.
A Unified Multi-Scale Model for Pore-Scale Flow Simulations in Soils
Yang, Xiaofan; Liu, Chongxuan; Shang, Jianying; Fang, Yilin; Bailey, Vanessa L.
2014-01-30
Pore-scale simulations have received increasing interest in subsurface sciences because they provide mechanistic insights into the macroscopic phenomena of water flow and reactive transport. Applying pore-scale simulation to soils and sediments is challenging, however, because characterization limitations often allow only partial resolution of pore structure and geometry. A significant proportion of the pore space in soils and sediments is below the spatial resolution, forming a mixed medium of pore and porous domains. Here we report a unified multi-scale model (UMSM) that can simulate water flow and transport in such mixed media under both saturated and unsaturated conditions. The approach modifies the classic Navier-Stokes equation by adding a Darcy term to describe fluid momentum and uses a generalized mass balance equation for saturated and unsaturated conditions. By properly defining physical parameters, the UMSM can be applied in both pore and porous domains. This paper describes the set of equations for the UMSM, a series of validation cases under saturated or unsaturated conditions, and a real soil case demonstrating application of the approach.
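The modification described above, a Darcy drag term added to the Navier-Stokes momentum balance, is commonly written in a Brinkman-type form. The sketch below uses generic notation (permeability k, viscosity mu) assumed for illustration rather than quoted from the paper:

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u}
    - \frac{\mu}{k}\,\mathbf{u} + \rho\,\mathbf{g}
% In resolved pores the permeability k is effectively infinite, the Darcy term
% -(\mu/k)\mathbf{u} drops out, and Navier-Stokes flow is recovered; in the
% unresolved porous domain a small k makes the Darcy term dominate, recovering
% Darcy-type flow. A single momentum equation thus serves both domains.
```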
Huntzinger, D.N.; Schwalm, C.; Michalak, A.M; Schaefer, K.; King, A.W.; Wei, Y.; Jacobson, A.; Liu, S.; Cook, R.; Post, W.M.; Berthier, G.; Hayes, D.; Huang, M.; Ito, A.; Lei, H.; Lu, C.; Mao, J.; Peng, C.H.; Peng, S.; Poulter, B.; Riccuito, D.; Shi, X.; Tian, H.; Wang, W.; Zeng, N.; Zhao, F.; Zhu, Q.
2013-01-01
Terrestrial biosphere models (TBMs) have become an integral tool for extrapolating local observations and understanding of land-atmosphere carbon exchange to larger regions. The North American Carbon Program (NACP) Multi-scale synthesis and Terrestrial Model Intercomparison Project (MsTMIP) is a formal model intercomparison and evaluation effort focused on improving the diagnosis and attribution of carbon exchange at regional and global scales. MsTMIP builds upon current and past synthesis activities, and has a unique framework designed to isolate, interpret, and inform understanding of how model structural differences impact estimates of carbon uptake and release. Here we provide an overview of the MsTMIP effort and describe how the MsTMIP experimental design enables the assessment and quantification of TBM structural uncertainty. Model structure refers to the types of processes considered (e.g. nutrient cycling, disturbance, lateral transport of carbon), and how these processes are represented (e.g. photosynthetic formulation, temperature sensitivity, respiration) in the models. By prescribing a common experimental protocol with standard spin-up procedures and driver data sets, we isolate any biases and variability in TBM estimates of regional and global carbon budgets resulting from differences in the models themselves (i.e. model structure) and model-specific parameter values. An initial intercomparison of model structural differences is represented using hierarchical cluster diagrams (a.k.a. dendrograms), which highlight similarities and differences in how models account for carbon cycle, vegetation, energy, and nitrogen cycle dynamics. We show that, despite the standardized protocol used to derive initial conditions, models show a high degree of variation for GPP, total living biomass, and total soil carbon, underscoring the influence of differences in model structure and parameterization on model estimates.
SUSTAINABLE MANUFACTURING VIA MULTI-SCALE PHYSICS-BASED PROCESS...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
SUSTAINABLE MANUFACTURING VIA MULTI-SCALE PHYSICS-BASED PROCESS MODELING AND MANUFACTURING-INFORMED DESIGN Third Wave Systems, Inc. - Minneapolis, MN Micro-structural modeling ...
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
Pannala, Sreekanth; Turner, John A.; Allu, Srikanth; Elwasif, Wael R.; Kalnaus, Sergiy; Simunovic, Srdjan; Kumar, Abhishek; Billings, Jay Jay; Wang, Hsin; Nanda, Jagjit
2015-08-19
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.
Welz, Oliver; Burke, Michael P.; Antonov, Ivan O.; Goldsmith, C. Franklin; Savee, John David; Osborn, David L.; Taatjes, Craig A.; Klippenstein, Stephen J.; Sheps, Leonid
2015-04-10
We studied low-temperature propane oxidation at P = 4 Torr and T = 530, 600, and 670 K by time-resolved multiplexed photoionization mass spectrometry (MPIMS), which probes the reactants, intermediates, and products with isomeric selectivity using tunable synchrotron vacuum UV ionizing radiation. The oxidation is initiated by pulsed laser photolysis of oxalyl chloride, (COCl)_{2}, at 248 nm, which rapidly generates a ~1:1 mixture of 1-propyl (n-propyl) and 2-propyl (i-propyl) radicals via the fast Cl + propane reaction. At all three temperatures, the major stable product species is propene, formed in the propyl + O_{2} reactions by direct HO_{2} elimination from both n- and i-propyl peroxy radicals. The experimentally derived propene yields relative to the initial concentration of Cl atoms are (20 ± 4)% at 530 K, (55 ± 11)% at 600 K, and (86 ± 17)% at 670 K at a reaction time of 20 ms. The lower yield of propene at low temperature reflects substantial formation of propyl peroxy radicals, which do not completely decompose on the experimental time scale. In addition, C_{3}H_{6}O isomers methyloxirane, oxetane, acetone, and propanal are detected as minor products. Our measured yields of oxetane and methyloxirane, which are coproducts of OH radicals, suggest a revision of the OH formation pathways in models of low-temperature propane oxidation. The experimental results are modeled and interpreted using a multiscale informatics approach, presented in detail in a separate publication (Burke, M. P.; Goldsmith, C. F.; Klippenstein, S. J.; Welz, O.; Huang H.; Antonov I. O.; Savee J. D.; Osborn D. L.; Zádor, J.; Taatjes, C. A.; Sheps, L. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions. J. Phys. Chem A. 2015, DOI: 10.1021/acs.jpca.5b01003). 
Additionally, we found that the model predicts the time profiles and yields of the experimentally observed primary products well, and shows satisfactory agreement for products formed mostly via secondary radicalradical reactions.
Wang, Minghuai; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Schanen, D.; Xiao, Heng; Liu, Xiaohong; Rasch, Philip J.; Guo, Zhun
2015-06-01
In this study, a higher-order turbulence closure scheme, called Cloud Layers Unified by Binormals (CLUBB), is implemented into a Multi-scale Modeling Framework (MMF) model to improve low cloud simulations. The performance of CLUBB in MMF simulations with two different microphysics configurations (one-moment cloud microphysics without aerosol treatment and two-moment cloud microphysics coupled with aerosol treatment) is evaluated against observations and further compared with results from the Community Atmosphere Model, Version 5 (CAM5) with conventional cloud parameterizations. CLUBB is found to improve low cloud simulations in the MMF, and the improvement is particularly evident in the stratocumulus-to-cumulus transition regions. Compared to the single-moment cloud microphysics, CLUBB with two-moment microphysics produces clouds that are closer to the coast and agree better with observations. In the stratocumulus-to-cumulus transition regions, CLUBB with two-moment cloud microphysics produces shortwave cloud forcing in better agreement with observations, while CLUBB with single-moment cloud microphysics overestimates shortwave cloud forcing. CLUBB is further found to produce quantitatively similar improvements in the MMF and CAM5, with slightly better performance in the MMF simulations (e.g., MMF with CLUBB generally produces low clouds that are closer to the coast than CAM5 with CLUBB). Improved low cloud simulations in the MMF make it an even more attractive tool for studying aerosol-cloud-precipitation interactions.
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing, E-mail: youbing-yin@uiowa.edu (Department of Mechanical and Industrial Engineering; IIHR-Hydroscience and Engineering; and Department of Radiology, The University of Iowa, Iowa City, IA 52242, United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu (Department of Mechanical and Industrial Engineering and IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242, United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu (Departments of Radiology, Biomedical Engineering, and Internal Medicine, The University of Iowa, Iowa City, IA 52242, United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz (Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu (Department of Mechanical and Industrial Engineering and IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242, United States)
2013-07-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields unphysiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.
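The role of cubic interpolation here is to guarantee C1 continuity (matching values and slopes) when stitching together the imaged states. A minimal sketch of the idea using cubic Hermite segments; the volume numbers and slopes below are illustrative, not taken from the study:

```python
# Sketch: C1 cubic (Hermite) interpolation between a few imaged lung volumes,
# as used to build smooth time-varying ventilation. Values are illustrative.
def hermite(t, t0, t1, v0, v1, m0, m1):
    """Cubic Hermite segment: matches values and slopes at both ends, so
    chaining segments with shared slopes gives C1 continuity."""
    s = (t - t0) / (t1 - t0)
    h = t1 - t0
    return ((2*s**3 - 3*s**2 + 1) * v0 + (s**3 - 2*s**2 + s) * h * m0
            + (-2*s**3 + 3*s**2) * v1 + (s**3 - s**2) * h * m1)

times = [0.0, 1.0, 2.0]     # three imaged states over a half breathing cycle (s)
vols = [2.5, 3.2, 4.1]      # lung volumes (L), illustrative only
slopes = [0.0, 0.8, 0.0]    # zero slope at end-expiration/end-inspiration
v_mid = hermite(1.5, times[1], times[2], vols[1], vols[2], slopes[1], slopes[2])
```

Because adjacent segments share the slope at their common knot, the interpolated volume curve is continuously differentiable across the whole breathing cycle.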
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
Tchelepi, Hamdi
2014-11-14
A multiscale linear-solver framework for the pressure equation associated with flow in highly heterogeneous porous formations was developed. The multiscale-based approach is cast in a general algebraic form, which facilitates integration of the new scalable linear solver in existing flow simulators. The Algebraic Multiscale Solver (AMS) is employed as a preconditioner within a multi-stage strategy. The formulations investigated include the standard MultiScale Finite-Element (MSFE) and MultiScale Finite-Volume (MSFV) methods. The local-stage solvers include incomplete factorization and the so-called Correction Functions (CF) associated with the MSFV approach. Extensive testing of AMS as an iterative linear solver indicates excellent convergence rates and computational scalability. AMS compares favorably with advanced Algebraic MultiGrid (AMG) solvers for highly detailed three-dimensional heterogeneous models. Moreover, AMS is expected to be especially beneficial in solving time-dependent problems of coupled multiphase flow and transport in large-scale subsurface formations.
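The multi-stage idea — a coarse-scale correction plus a local smoother applied as a preconditioner inside an iterative solver — can be sketched generically. In this sketch a plain linear-interpolation coarse space and a damped-Jacobi local stage stand in for the MSFE/MSFV basis functions and incomplete factorization of the actual solver:

```python
import numpy as np

# Generic two-stage preconditioner sketch in the spirit of AMS: a Galerkin
# coarse-scale correction (stage 1) plus a damped-Jacobi local smoother
# (stage 2), used inside a Richardson iteration on a 1D pressure system.
n, nc = 63, 31
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D pressure Laplacian
P = np.zeros((n, nc))                                    # prolongation operator
for j in range(nc):
    i = 2 * j + 1               # fine index of coarse node (0-based)
    P[i, j] = 1.0
    P[i - 1, j] = 0.5           # linear interpolation to neighbors
    if i + 1 < n:
        P[i + 1, j] = 0.5
Ac = P.T @ A @ P                                         # Galerkin coarse operator

def precond(r):
    x = P @ np.linalg.solve(Ac, P.T @ r)         # stage 1: coarse correction
    x += (2.0 / 3.0) * (r - A @ x) / np.diag(A)  # stage 2: damped Jacobi smoothing
    return x

b = np.ones(n)
x = np.zeros(n)
for _ in range(100):                             # preconditioned Richardson
    x += precond(b - A @ x)
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The coarse stage removes smooth (long-range) error that the local stage cannot touch, which is exactly why such two-stage schemes converge where a smoother alone stalls.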
Grell, Georg; Fast, Jerome D.; Gustafson, William I.; Peckham, Steven E.; McKeen, Stuart A.; Salzmann, Marc; Freitas, Saulo
2010-01-01
This is a conference proceeding that is now being assembled into a book; this material is chapter 2 of "INTEGRATED SYSTEMS OF MESO-METEOROLOGICAL AND CHEMICAL TRANSPORT MODELS," published by Springer. The chapter title is "On-line Chemistry within WRF: Description and Evaluation of a State-of-the-Art Multiscale Air Quality and Weather Prediction Model." The original conference was the COST-728/NetFAM workshop on integrated systems of meso-meteorological and chemical transport models, held at the Danish Meteorological Institute, Copenhagen, May 21-23, 2007.
Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.
2012-03-01
In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.
Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.; Jansen, K. E.; Antal, S. P.; Podowski, M. Z.
2012-07-01
The required technological and safety standards for future Gen IV Reactors can only be achieved if advanced simulation capabilities become available, which combine high-performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten-sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding-window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident.
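The sliding-window time averaging mentioned above can be sketched in a few lines; the window length and the test signal are illustrative only, not taken from the paper:

```python
import math

# Sketch: sliding-window time averaging for extracting mean flow parameters
# from a transient signal. The window length is an illustrative choice.
def sliding_mean(signal, window):
    """Running mean over the most recent `window` samples."""
    out, total = [], 0.0
    for i, s in enumerate(signal):
        total += s
        if i >= window:
            total -= signal[i - window]   # drop the sample leaving the window
        out.append(total / min(i + 1, window))
    return out

# A mean level of 1.0 with a superimposed oscillation, as a stand-in for a
# fluctuating velocity trace; the window spans several oscillation periods.
signal = [1.0 + 0.5 * math.sin(0.5 * i) for i in range(200)]
smoothed = sliding_mean(signal, window=50)
```

Unlike a cumulative average, the sliding window tracks a slowly drifting mean during a transient, which is why it suits the loss-of-flow cases described above.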
Modeling, Analysis and Simulation of Multiscale Preferential Flow - 8/05-8/10 - Final Report
Ralph Showalter; Malgorzata Peszynska
2012-07-03
The research agenda of this project comprises: (1) Modeling of preferential transport from mesoscale to macroscale; (2) Modeling of fast flow in narrow fractures in porous media; (3) Pseudo-parabolic models of dynamic capillary pressure; (4) Adaptive computational upscaling of flow with inertia from porescale to mesoscale; (5) Adaptive modeling of nonlinear coupled systems; and (6) Adaptive modeling and a-posteriori estimators for coupled systems with heterogeneous data.
Multi-scale Modelling of bcc-Fe Based Alloys for Nuclear Applications
Malerba, Lorenzo
2008-07-01
Understanding the basic mechanisms that determine microstructure changes in neutron irradiated steels is vital for a safe lifetime management of existing nuclear reactors and a safe design of future nuclear options. Low-alloyed ferritic steels containing Cu, Ni, Mn and Si as principal solute atoms are used as structural materials for current reactor vessels. The microstructural evolution under irradiation in alloys is decided by the interplay between defect formation and thermodynamic driving forces, together determining the appearance of phase transformations (precipitation, segregation,...) and favouring or delaying the nucleation and growth of point-defect clusters, their diffusion and their mutual recombination or removal at sinks. A reliable description of the production, evolution and accumulation of radiation damage must therefore start from the atomic level and requires being able to describe multicomponent systems for timescales ranging from few picoseconds to years. This goal demands firstly the fabrication of interatomic potentials for alloys that must be both consistent with the thermodynamic properties of the system and capable of reproducing correctly the characteristic solute-point defect interactions, versus ab initio or experimental data. Secondly the performance of extensive molecular dynamics (MD) simulations, to grasp the main mechanisms of defect production, diffusion, mutual interaction, and interaction with solute atoms and impurities. Thirdly, the development of simulation tools capable of describing the microstructure evolution beyond the time-frame and length-scale of MD, while reproducing as much as possible the atomic-level origin of the mechanisms governing the evolution of the system, including phase changes. In this presentation the results of recent efforts made in this direction in the case of Fe-Cu, Fe-Cr and Fe-Ni alloys, as basic model alloys for the description of steels of technological relevance, are highlighted. 
In particular, advanced techniques to fit interatomic potentials consistent with thermodynamics are proposed and the results of their application to the mentioned alloys are presented. Next, the development of advanced methods, based on the use of artificial intelligence, to improve both the physical reliability and the computational efficiency of kinetic Monte Carlo codes for the study of point-defect clustering and phase changes beyond the scale of MD, is reported. This recent progress holds the promise of producing, in the near future, reliable tools for describing the microstructure evolution of realistic model alloys under irradiation.
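The kinetic Monte Carlo machinery referred to here is built, at its core, on the standard residence-time (BKL) algorithm. A minimal sketch with arbitrary illustrative rates, not fitted to any alloy:

```python
import math
import random

# Minimal residence-time (BKL) kinetic Monte Carlo step: choose an event with
# probability proportional to its rate, then advance the clock by an
# exponentially distributed waiting time. Rates below are illustrative only.
def kmc_step(rates, rng):
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for idx, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total   # exponential waiting time
    return idx, dt

rng = random.Random(42)
rates = [5.0, 1.0, 0.1]    # e.g. vacancy hop, solute jump, cluster emission
counts = [0, 0, 0]
t = 0.0
for _ in range(5000):
    idx, dt = kmc_step(rates, rng)
    counts[idx] += 1
    t += dt
```

Because every step advances physical time by the inverse of the total rate, the method reaches timescales far beyond MD — which is precisely the role kMC plays in the multiscale chain described above.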
Evaluation of the Multi-scale Modeling Framework Using Data from...
Office of Scientific and Technical Information (OSTI)
Unfortunately, the traditional parametric approach of diagnosing cloud and radiation properties for grid cells that are tens to hundreds of kilometers across from large-scale model ...
Costigan, Keeley Rochelle; Sauer, Jeremy A.; Dubey, Manvendra Krishna
2015-07-10
This report discusses the ghgas IC project which, when applied, allows for an evaluation of LANL's HIGRAD model, used to create atmospheric simulations.
Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
forecasting technology will be developed that leverages big data processing, deep machine learning, and cloud modeling integrated in a universal platform with an open architecture. ...
The Radiative Properties of Small Clouds: Multi-Scale Observations and Modeling
Feingold, Graham; McComiskey, Allison
2013-09-25
Warm, liquid clouds and their representation in climate models continue to represent one of the most significant unknowns in climate sensitivity and climate change. Our project combines ARM observations, LES modeling, and satellite imagery to characterize shallow clouds and the role of aerosol in modifying their radiative effects.
Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers
Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em
2015-09-15
Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).
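MUI itself is header-only C++, but the push/fetch-with-sampler pattern it describes can be illustrated with a small Python sketch. The class and function names here are our own illustrations of the concept, not MUI's actual API:

```python
import math

# Conceptual sketch of the push/fetch + data-sampler pattern: one solver
# pushes (point, value) samples on its own mesh; the other fetches values at
# its own points through a sampler, so neither solver sees the other's mesh.
class Interface:
    def __init__(self):
        self.frames = {}                      # time -> list of (point, value)

    def push(self, t, point, value):
        self.frames.setdefault(t, []).append((point, value))

    def fetch(self, t, point, sampler):
        return sampler(point, self.frames.get(t, []))

def gaussian_sampler(sigma):
    """Sampler: Gaussian-kernel weighted average of the pushed samples."""
    def sample(x, samples):
        wsum = vsum = 0.0
        for p, v in samples:
            w = math.exp(-((x - p) ** 2) / (2.0 * sigma ** 2))
            wsum += w
            vsum += w * v
        return vsum / wsum if wsum else 0.0
    return sample

# Solver A pushes a coarse field u(x) = x; solver B fetches at its own point.
iface = Interface()
for i in range(11):
    iface.push(t=0.0, point=0.1 * i, value=0.1 * i)
u_mid = iface.fetch(0.0, 0.55, gaussian_sampler(sigma=0.05))
```

Swapping the sampler changes the interpolation rule without touching either solver, which is the "solver-independent data interpretation" the abstract describes.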
Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting Technology
Broader source: Energy.gov [DOE]
As part of this project, new solar forecasting technology will be developed that leverages big data processing, deep machine learning, and cloud modeling integrated in a universal platform with an...
Broader source: Energy.gov [DOE]
Micro-structural modeling tools for metals are being developed and used to demonstrate a design framework to improve the understanding of dynamic response and statistical variability. This project will enable design engineers to evaluate the effects of design changes and material selection; anticipate quality and cost prior to implementation on the factory floor; and enable low-waste, low-cost manufacturing. Third Wave Systems, Inc. - Minneapolis, MN
Understanding Creep Mechanisms in Graphite with Experiments, Multiscale Simulations, and Modeling
Eapen, Jacob; Murty, Korukonda; Burchell, Timothy
2014-06-02
Disordering mechanisms in graphite have a long history with conflicting viewpoints. Using Raman and x-ray photon spectroscopy, electron microscopy, and x-ray diffraction experiments, together with atomistic modeling and simulations, the current project has developed a fundamental understanding of early-to-late-stage radiation damage mechanisms in nuclear reactor grade graphite (NBG-18 and PCEA). We show that the topological defects in graphite play an important role under neutron and ion irradiation.
Multiscale modeling of thermal conductivity of high burnup structures in UO2 fuels
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bai, Xian -Ming; Tonks, Michael R.; Zhang, Yongfeng; Hales, Jason D.
2015-12-22
The high burnup structure forming at the rim region in UO2-based nuclear fuel pellets has interesting physical properties such as improved thermal conductivity, even though it contains a high density of grain boundaries and micron-size gas bubbles. To understand this counterintuitive phenomenon, mesoscale heat conduction simulations with inputs from atomistic simulations and experiments were conducted to study the thermal conductivities of a small-grain high burnup microstructure and two large-grain unrestructured microstructures. We concluded that the phonon scattering effects caused by small point defects such as dispersed Xe atoms in the grain interior must be included in order to correctly predict the thermal transport properties of these microstructures. In extreme cases, even a small concentration of dispersed Xe atoms, such as 10^-5, can result in a lower thermal conductivity in the large-grain unrestructured microstructures than in the small-grain high burnup structure. The high-density grain boundaries in a high burnup structure act as defect sinks and can reduce the concentration of point defects in its grain interior and improve its thermal conductivity in comparison with its large-grain counterparts. Furthermore, an analytical model was developed to describe the thermal conductivity at different concentrations of dispersed Xe, bubble porosities, and grain sizes. Upon calibration, the model is robust and agrees well with independent heat conduction modeling over a wide range of microstructural parameters.
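The abstract does not reproduce the calibrated model, but the ingredients it names can be sketched with generic textbook forms: added thermal resistivity from dispersed-Xe phonon scattering, a grain-boundary (Kapitza) resistance term, and a Maxwell-Eucken porosity correction. All coefficients below are illustrative assumptions, not values from this work:

```python
# Generic sketch of the ingredients such an analytical model combines.
# Coefficients are illustrative assumptions, NOT the calibrated values.
def effective_conductivity(k0, c_xe, porosity, grain_size,
                           a_defect=2.0e3, r_kapitza=1.0e-9):
    """k0 in W/m-K, c_xe as atomic fraction, grain_size in meters."""
    resistivity = 1.0 / k0 + a_defect * c_xe       # dispersed-Xe scattering
    resistivity += r_kapitza / grain_size          # grain-boundary resistance
    k = 1.0 / resistivity
    return k * (1.0 - porosity) / (1.0 + 0.5 * porosity)  # Maxwell-Eucken

# Small-grain high-burnup structure (low dispersed Xe: boundaries act as
# sinks) vs a large-grain microstructure with more dispersed Xe:
k_hbs = effective_conductivity(8.0, c_xe=1e-7, porosity=0.1, grain_size=3e-7)
k_large = effective_conductivity(8.0, c_xe=1e-5, porosity=0.1, grain_size=1e-5)
```

With these illustrative numbers the defect-scattering term dominates the grain-boundary term, reproducing the qualitative finding above: the small-grain structure with cleaner grain interiors conducts better than its large-grain, Xe-laden counterpart.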
Swift, D. C.; Paisley, Dennis L.; Kyrala, George A.; Hauer, Allan
2002-01-01
Ab initio quantum mechanics was used to construct a thermodynamically complete and rigorous equation of state for beryllium in the hexagonal and body-centred cubic structures, and to predict elastic constants as a function of compression. The equation of state agreed well with Hugoniot data and previously published equations of state, but the temperatures were significantly different. The hexagonal/bcc phase boundary agreed reasonably well with published data, suggesting that the temperatures in our new equation of state were accurate. Shock waves were induced in single crystals and polycrystalline foils of beryllium, by direct illumination using the TRIDENT laser at Los Alamos. The velocity history at the surface of the sample was measured using a line-imaging VISAR, and transient X-ray diffraction (TXD) records were obtained with a plasma backlighter and X-ray streak cameras. The VISAR records exhibited elastic precursors, plastic waves, phase changes and spall. Dual TXD records were taken, in Bragg and Laue orientations. The Bragg lines moved in response to compression in the uniaxial direction. Because direct laser drive was used, the results had to be interpreted with the aid of radiation hydrodynamics simulations to predict the loading history for each laser pulse. In the experiments where there was evidence of polymorphism in the VISAR record, additional lines appeared in the Bragg and Laue records. The corresponding pressures were consistent with the phase boundary predicted by the quantum mechanical equation of state for beryllium. A model of the response of a single crystal of beryllium to shock loading is being developed using these new theoretical and experimental results. This model will be used in meso-scale studies of the response of the microstructure, allowing us to develop a more accurate representation of the behaviour of polycrystalline beryllium.
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
Trucano, Timothy Guy
No abstract prepared.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
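One DMA stage can be schematized as follows, with a synthetic fine-scale field standing in for the short DNS: running time averages during the run, a volume average onto a coarser grid, and the correlation that becomes a coupling source term. All numbers are illustrative:

```python
import math

# Schematic of one DMA stage: running time averages on a fine field, a
# volume average onto a coarser grid, and the coupling correlation
# <uv> - <u><v> computed directly from the field (no modeling).
nf, ratio = 64, 4                  # fine cells; fine cells per coarse cell
nsteps, dt = 200, 1e-3

u_avg = [0.0] * nf                 # running time averages, updated during the run
v_avg = [0.0] * nf
uv_avg = [0.0] * nf
for step in range(1, nsteps + 1):
    t = step * dt
    for i in range(nf):
        x = (i + 0.5) / nf
        u = math.sin(2 * math.pi * x) + 0.1 * math.sin(400.0 * t + i)
        v = math.cos(2 * math.pi * x) + 0.1 * math.cos(400.0 * t + i)
        u_avg[i] += (u - u_avg[i]) / step      # cumulative (running) mean
        v_avg[i] += (v - v_avg[i]) / step
        uv_avg[i] += (u * v - uv_avg[i]) / step

nc = nf // ratio

def volume_average(f):
    """Average groups of `ratio` fine cells onto one coarse cell."""
    return [sum(f[i * ratio:(i + 1) * ratio]) / ratio for i in range(nc)]

# Coupling correlation between scales, added as a source term on the
# next-coarser mesh in the DMA procedure.
corr = [uv - u * v for uv, u, v in
        zip(volume_average(uv_avg), volume_average(u_avg), volume_average(v_avg))]
```

The correlation captures what the coarse mesh cannot resolve of the fine field; in DMA it is computed from first principles rather than supplied by a turbulence model, exactly as described above.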
Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.
2015-01-01
This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection-mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady-state approximate solution to the diffusion equation. Figure 1 presents the evolution of the diffusion profiles of a containment granuloma over time.
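The matrix-based steady-state alternative to CFL-limited explicit stepping can be illustrated in one dimension; the function and variable names are ours, not from the paper's code:

```python
# Sketch: steady-state 1D diffusion solved as a tridiagonal linear system,
# avoiding the explicit-scheme CFL bound dt <= dx^2 / (2*D). Illustrative.
def solve_steady_diffusion(n, dx, D, source, u_left=1.0, u_right=0.0):
    """Solve D*u'' + source = 0 on n interior nodes (Dirichlet BCs) with
    the Thomas algorithm for the tridiagonal system."""
    a = [-D / dx**2] * n          # sub-diagonal
    b = [2.0 * D / dx**2] * n     # diagonal
    c = [-D / dx**2] * n          # super-diagonal
    d = list(source)
    d[0] += D / dx**2 * u_left    # fold boundary values into the RHS
    d[-1] += D / dx**2 * u_right
    for i in range(1, n):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                 # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# With zero source the steady profile is linear between the boundary values.
u = solve_steady_diffusion(n=99, dx=0.01, D=1.0, source=[0.0] * 99)
```

Because the steady state is obtained in one solve, the agent-scale model can take time steps set by the biology rather than by the diffusion stability limit — the trade-off the paper describes.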
A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations
Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; Hammond, Glenn E.
2015-06-01
Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation. 
A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad-hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater – river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.
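The coupling pattern described here (user-provided templates and scripts around unmodified at-scale codes) can be sketched in a few lines of Python. Everything below is illustrative: the template keys, the trivial "microscale solver", and the upscaled quantity are stand-ins, not the framework's actual interface.

```python
from string import Template

# Hypothetical input template for an at-scale code; in the framework
# described above, such templates are user-provided files.
INPUT_TEMPLATE = Template("porosity=$porosity\nrate=$rate\n")

def write_input(params):
    """Render the microscale input deck from macroscale state."""
    return INPUT_TEMPLATE.substitute(params)

def run_microscale(input_deck):
    """Stand-in for launching the pore-scale simulator.

    Here we just parse the deck and return an 'effective rate',
    mimicking the upscaled quantity a pore-scale run would produce.
    """
    values = dict(line.split("=") for line in input_deck.strip().splitlines())
    return float(values["rate"]) * float(values["porosity"])

def coupled_step(macro_state):
    """One macro iteration: downscale -> run micro -> upscale."""
    deck = write_input(macro_state)
    effective_rate = run_microscale(deck)
    # Feed the upscaled parameter back into the macroscale state.
    return {**macro_state, "rate": effective_rate}

state = {"porosity": 0.5, "rate": 2.0}
state = coupled_step(state)
print(state["rate"])  # 1.0
```

In the real framework, Swift would schedule many such steps concurrently on an HPC system; the sketch only shows the downscale-run-upscale data flow.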
Weston, David; Hanson, Paul J; Norby, Richard J; Tuskan, Gerald A; Wullschleger, Stan D
2012-01-01
Network analysis is now a common statistical tool for molecular biologists. Network algorithms are readily used to model gene, protein, and metabolic correlations, providing insight into pathways driving biological phenomena. One output from such an analysis is a list of candidate genes that can be responsible, in part, for the biological process of interest. The question remains, however, as to whether molecular network analysis can be used to inform process models at higher levels of biological organization. In our previous work, transcriptional networks derived from three plant species were constructed, interrogated for orthology, and then correlated to photosynthetic inhibition at elevated temperature. One unique aspect of that study was the link from co-expression networks to net photosynthesis. In this addendum, we propose a conceptual model where traditional network analysis can be linked to whole-plant models, thereby informing predictions on key processes such as photosynthesis, nutrient uptake and assimilation, and C partitioning.
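As a concrete illustration of the first step of such an analysis, a co-expression network can be built by thresholding pairwise Pearson correlations of expression profiles. The gene names, profiles, and threshold below are invented; real analyses use many samples and multiple-testing corrections.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def coexpression_network(profiles, threshold=0.9):
    """Edges between gene pairs whose |correlation| meets the threshold."""
    genes = list(profiles)
    edges = set()
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(profiles[g], profiles[h])) >= threshold:
                edges.add((g, h))
    return edges

# Toy expression data (hypothetical genes, four conditions each).
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 4.0, 6.2, 8.1],  # tracks geneA closely
    "geneC": [5.0, 1.0, 4.0, 2.0],  # unrelated
}
edges = coexpression_network(profiles)
print(sorted(edges))  # [('geneA', 'geneB')]
```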
Niyogi, Devdutta S.
2013-06-07
The CLASIC experiment was conducted over the US Southern Great Plains (SGP) in June 2007 with the objective of improving understanding of cumulus convection, particularly as it relates to land surface conditions. The project was designed to assist with improving the representation of land-atmosphere convection initiation, which is important for global and regional models. The study helped address a critical, documented deficiency in the models, central to the ARM objectives for cumulus convection initiation, particularly under summertime conditions. The project was guided by a scientific question building on the CLASIC theme questions: What is the effect of improved land surface representation on the ability of coupled models to simulate cumulus and convection initiation? The focus was on the US Southern Great Plains region. Since the CLASIC period was anomalously wet, the strategy was to use other periods and domains to develop a comparative assessment for the CLASIC data period, and to understand the mechanisms by which the anomalously wet conditions affected tropical systems and convection over land. The data periods include the IHOP 2002 field experiment, which covered roughly the same domain as CLASIC in the SGP, and some of the DOE-funded AmeriFlux datasets.
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
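One simple way to flag a vertex as a critical point of failure, in the graph-model sense above, is to test whether its removal disconnects the network (an articulation point). A minimal sketch on a hypothetical adjacency list; the patented evaluation is more general than pure connectivity.

```python
def connected(adj, skip=None):
    """Depth-first connectivity check, optionally ignoring one vertex."""
    nodes = [v for v in adj if v != skip]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w != skip and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def critical_points(adj):
    """Vertices whose removal disconnects the network."""
    return {v for v in adj if not connected(adj, skip=v)}

# Hypothetical network: a spur A-B attached at C to a D-E triangle.
adj = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D", "E"],
    "D": ["C", "E"],
    "E": ["C", "D"],
}
print(sorted(critical_points(adj)))  # ['B', 'C']
```

Note that D and E are not critical: the triangle provides a redundant path, which is exactly the kind of structural redundancy such an evaluation is meant to reveal.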
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud-resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m^2) and longwave cloud forcing (~5 W/m^2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune superparameterized GCMs.
Multi-Scale Simulations Solve a Plasma Turbulence Mystery
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Multi-Scale Simulations Solve a Plasma Turbulence Mystery: Coupled Model Reproduces Experimental Electron Heat Losses. March 7, 2016. Contact: Kathy Kincade, kkincade@lbl.gov, +1 510 495 2124. [Figure: high-resolution image of the inside of the Alcator C-Mod tokamak, with a representative cross-section of a plasma; the inset shows the approximate domain for one of the multi-scale simulations and a graphic of the plasma turbulence.]
Multiscale Mathematics For Plasma Kinetics Spanning Multiple...
Office of Scientific and Technical Information (OSTI)
Technical Report: Multiscale Mathematics For Plasma Kinetics Spanning Multiple Collisionality Regimes Citation Details In-Document Search Title: Multiscale Mathematics For Plasma ...
Andrade, José E; Rudnicki, John W
2012-12-14
In this project, a predictive multiscale framework will be developed to simulate the strong coupling between solid deformations and fluid diffusion in porous rocks. We intend to improve macroscale modeling by incorporating fundamental physical modeling at the microscale in a computationally efficient way. This is an essential step toward further developments in multiphysics modeling, linking hydraulic, thermal, chemical, and geomechanical processes. This research will focus on areas where severe deformations are observed, such as deformation bands, where classical phenomenology breaks down. Multiscale geometric complexities and key geomechanical and hydraulic attributes of deformation bands (e.g., grain sliding and crushing, and pore collapse, causing interstitial fluid expulsion under saturated conditions) can significantly affect the constitutive response of the skeleton and the intrinsic permeability. The discrete element method (DEM) and the lattice Boltzmann method (LBM) will be used to probe the microstructure---under the current state---to extract the evolution of macroscopic constitutive parameters and the permeability tensor. These evolving macroscopic constitutive parameters are then directly used in continuum-scale predictions using the finite element method (FEM), accounting for the coupled solid deformation and fluid diffusion. A particularly valuable aspect of this research is the thorough quantitative verification and validation program at different scales. The multiscale homogenization framework will be validated using X-ray computed tomography and 3D digital image correlation in situ at the Advanced Photon Source at Argonne National Laboratory. Also, the hierarchical computations at the specimen level will be validated using the aforementioned techniques in samples of sandstone undergoing deformation bands.
Costigan, Keeley Rochelle; Dubey, Manvendra Krishna
2015-07-10
Atmospheric models are compared, in collaboration with LANL and the University of Michigan, to understand emissions and the state of the atmosphere from a modeling perspective.
MULTISCALE MATHEMATICS FOR BIOMASS CONVERSION TO RENEWABLE HYDROGEN
Vlachos, Dionisios; Plechac, Petr; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
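Goal (i), spatio-temporal kinetic Monte Carlo, rests on the standard rejection-free KMC step: choose an event with probability proportional to its rate and advance the clock by an exponentially distributed increment. The event names and rates below are placeholders, not the project's surface chemistry.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick event i with probability
    r_i / R and advance time by dt = -ln(u) / R, where R = sum of rates."""
    total = sum(rates.values())
    pick = rng.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if pick <= acc:
            chosen = event
            break
    dt = -math.log(1.0 - rng.random()) / total  # u drawn from (0, 1]
    return chosen, dt

# Hypothetical elementary events with made-up rates (1/s).
rates = {"adsorb": 2.0, "desorb": 0.5, "react": 1.5}
rng = random.Random(42)
event, dt = kmc_step(rates, rng)
print(event in rates, dt > 0.0)
```

Coarse-grained KMC replaces such microscopic events with aggregated ones over cells of sites, but the selection rule above is unchanged.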
Kim, G. H.; Smith, K.
2009-05-01
Addresses battery requirements for electric vehicles using a model that evaluates physical-chemical processes in lithium-ion batteries, from atomic variations to vehicle interface controls.
Multi-Scale Multi-Dimensional Li-Ion Battery Model for Better Design and Management (Presentation)
Kim, G.-H.; Smith, K.
2008-10-01
The developed model is used to provide a better understanding and to help answer engineering questions about improving the design, operational strategy, management, and safety of cells.
Multilingual interfaces for parallel coupling in multiphysics and multiscale systems.
Ong, E. T.; Larson, J. W.; Norris, B.; Jacob, R. L.; Tobis, M.; Steder, M.; Mathematics and Computer Science; Univ. of Wisconsin; Australian National Univ.; Univ. of Chicago
2007-01-01
Multiphysics and multiscale simulation systems are emerging as a new grand challenge in computational science, largely because of increased computing power provided by the distributed-memory parallel programming model on commodity clusters. These systems often present a parallel coupling problem in their intercomponent data exchanges. Another potential problem in these coupled systems is language interoperability between their various constituent codes. In anticipation of combined parallel coupling/language interoperability challenges, we have created a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit (MCT). We describe the method used for automatically generating the bindings using the Babel language interoperability tool, and illustrate with short examples how MCT can be used from the C++ and Python languages. We report preliminary performance results for the MCT interpolation benchmark. We conclude with a discussion of the significance of this work to the rapid prototyping of large parallel coupled systems.
Freed, Alan D.; Einstein, Daniel R.
2011-04-14
An isotropic constitutive model for the parenchyma of lung has been derived from the theory of hypo-elasticity. The intent is to use it to represent the mechanical response of this soft tissue in sophisticated, computational, fluid-dynamic models of the lung. This demands that the continuum model be accurate, yet simple and efficient. An objective algorithm for its numeric integration is provided. The response of the model is determined for several boundary-value problems whose experiments are used for material characterization. The effective elastic, bulk, and shear moduli, and Poisson's ratio, as tangent functions, are also derived. The model is characterized against published experimental data for lung. A bridge between this continuum model and a dodecahedral model of alveolar geometry is investigated, with preliminary findings being reported.
Method and apparatus for modeling interactions
Xavier, Patrick G.
2002-01-01
The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of prior approaches. The method of the present invention comprises representing two bodies undergoing translations by two swept-volume representations. Interactions such as nearest approach and collision can be modeled based on the swept-body representations. The present invention is more robust and allows faster modeling than previous methods.
Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, D.; Zhao, Kaiguang; LeDuc, Stephen D.; Xu, Min; Xiong, Wei; Zhang, Aiping; Izaurralde, Roberto C.; Thomson, Allison M.; West, Tristram O.; Post, W. M.
2014-05-01
The development of effective measures to stabilize atmospheric CO2 concentration and mitigate negative impacts of climate change requires accurate quantification of the spatial variation and magnitude of the terrestrial carbon (C) flux. However, the spatial pattern and strength of terrestrial C sinks and sources remain uncertain. In this study, we designed a spatially-explicit agroecosystem modeling system by integrating the Environmental Policy Integrated Climate (EPIC) model with multiple sources of geospatial and surveyed datasets (including crop type map, elevation, climate forcing, fertilizer application, tillage type and distribution, and crop planting and harvesting date), and applied it to examine the sensitivity of cropland C flux simulations to two widely used soil databases (i.e. State Soil Geographic-STATSGO of a scale of 1:250,000 and Soil Survey Geographic-SSURGO of a scale of 1:24,000) in Iowa, USA. To efficiently execute numerous EPIC runs resulting from the use of high resolution spatial data (56 m), we developed a parallelized version of EPIC. Both STATSGO and SSURGO led to similar simulations of crop yields and Net Ecosystem Production (NEP) estimates at the State level. However, substantial differences were observed at the county and sub-county (grid) levels. In general, the fine resolution SSURGO data outperformed the coarse resolution STATSGO data for county-scale crop-yield simulation, and within STATSGO, the area-weighted approach provided more accurate results. Further analysis showed that spatial distribution and magnitude of simulated NEP were more sensitive to the resolution difference between SSURGO and STATSGO at the county or grid scale. 
For over 60% of the cropland areas in Iowa, the deviations between STATSGO- and SSURGO-derived NEP were larger than 1 Mg C ha^-1 yr^-1, or about half of the average cropland NEP, highlighting the significant uncertainty in spatial distribution and magnitude of simulated C fluxes resulting from differences in soil data resolution.
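The area-weighted approach mentioned above reduces to weighting each soil map unit's simulated value by the area it occupies. A minimal sketch with invented numbers:

```python
def area_weighted_mean(units):
    """units: (area, value) pairs, e.g. (ha, Mg/ha yield); returns
    the county-level value as an area-weighted average."""
    total_area = sum(area for area, _ in units)
    return sum(area * value for area, value in units) / total_area

# Hypothetical county with three soil map units (area in ha, yield in Mg/ha).
units = [(1000.0, 9.0), (3000.0, 10.0), (1000.0, 11.0)]
print(area_weighted_mean(units))  # 10.0
```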
A Many-Task Parallel Approach for Multiscale Simulations of Subsurface Flow and Reactive Transport
Scheibe, Timothy D.; Yang, Xiaofan; Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Palmer, Bruce J.; Tartakovsky, Alexandre M.
2014-12-16
Continuum-scale models have long been used to study subsurface flow, transport, and reactions but lack the ability to resolve processes that are governed by pore-scale mixing. Recently, pore-scale models, which explicitly resolve individual pores and soil grains, have been developed to more accurately model pore-scale phenomena, particularly reaction processes that are controlled by local mixing. However, pore-scale models are prohibitively expensive for modeling application-scale domains. This motivates the use of a hybrid multiscale approach in which continuum- and pore-scale codes are coupled either hierarchically or concurrently within an overall simulation domain (time and space). This approach is naturally suited to an adaptive, loosely-coupled many-task methodology with three potential levels of concurrency. Each individual code (pore- and continuum-scale) can be implemented in parallel; multiple semi-independent instances of the pore-scale code are required at each time step providing a second level of concurrency; and Monte Carlo simulations of the overall system to represent uncertainty in material property distributions provide a third level of concurrency. We have developed a hybrid multiscale model of a mixing-controlled reaction in a porous medium wherein the reaction occurs only over a limited portion of the domain. Loose, minimally-invasive coupling of pre-existing parallel continuum- and pore-scale codes has been accomplished by an adaptive script-based workflow implemented in the Swift workflow system. We describe here the methods used to create the model system, adaptively control multiple coupled instances of pore- and continuum-scale simulations, and maximize the scalability of the overall system. We present results of numerical experiments conducted on NERSC supercomputing systems; our results demonstrate that loose many-task coupling provides a scalable solution for multiscale subsurface simulations with minimal overhead.
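The second level of concurrency (multiple semi-independent pore-scale instances per time step) maps naturally onto a task pool. The sketch below uses Python's standard concurrent.futures with a stand-in "pore-scale" function; the actual system launches full parallel simulators through Swift.

```python
from concurrent.futures import ThreadPoolExecutor

def pore_scale_instance(region_id, concentration):
    """Stand-in for one pore-scale run on a subdomain; returns the
    region id with an 'upscaled' effective rate for that region."""
    return region_id, 0.1 * concentration

def advance_time_step(regions):
    """Launch all pore-scale instances needed for one macro time step
    as independent tasks and gather their upscaled results."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(pore_scale_instance, rid, conc)
                   for rid, conc in regions.items()]
        return dict(f.result() for f in futures)

# Three hypothetical high-reactivity regions with local concentrations.
rates = advance_time_step({"r1": 1.0, "r2": 2.0, "r3": 3.0})
print(sorted(rates))  # ['r1', 'r2', 'r3']
```

The third level of concurrency, Monte Carlo realizations of the whole system, would wrap another such pool around this one.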
Hybrid multiscale simulation of a mixing-controlled reaction
Scheibe, Timothy D.; Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Yang, Xiaofan; Palmer, Bruce J.; Tartakovsky, Alexandre M.; Elsethagen, Todd O.; Redden, George D.
2015-09-01
Continuum-scale models have been used to study subsurface flow, transport, and reactions for many years but lack the capability to resolve fine-grained processes. Recently, pore-scale models, which operate at scales of individual soil grains, have been developed to more accurately model and study pore-scale phenomena, such as mineral precipitation and dissolution reactions, microbially-mediated surface reactions, and other complex processes. However, these highly-resolved models are prohibitively expensive for modeling domains of sizes relevant to practical problems. To broaden the utility of pore-scale models for larger domains, we developed a hybrid multiscale model that initially simulates the full domain at the continuum scale and applies a pore-scale model only to areas of high reactivity. Since the location and number of pore-scale model regions in the model varies as the reactions proceed, an adaptive script defines the number and location of pore regions within each continuum iteration and initializes pore-scale simulations from macroscale information. Another script communicates information from the pore-scale simulation results back to the continuum scale. These components provide loose coupling between the pore- and continuum-scale codes into a single hybrid multiscale model implemented within the SWIFT workflow environment. In this paper, we consider an irreversible homogenous bimolecular reaction (two solutes reacting to form a third solute) in a 2D test problem. This paper is focused on the approach used for multiscale coupling between pore- and continuum-scale models, application to a realistic test problem, and implications of the results for predictive simulation of mixing-controlled reactions in porous media. Our results and analysis demonstrate that loose coupling provides a feasible, efficient and scalable approach for multiscale subsurface simulations.
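The adaptive region selection described above, deciding at each continuum iteration which cells receive a pore-scale model, can be sketched as a threshold on a local reactivity proxy. For a bimolecular reaction A + B -> C, the product of the reactant concentrations is a natural indicator of where mixing-controlled reaction is active; the fields and threshold below are illustrative, not the paper's actual criterion.

```python
def select_pore_regions(conc_a, conc_b, threshold=0.05):
    """Indices of continuum cells whose reactivity proxy a*b exceeds
    the threshold; only these cells get a pore-scale simulation."""
    return [i for i, (a, b) in enumerate(zip(conc_a, conc_b))
            if a * b > threshold]

# 1D toy fields: the reactants overlap only near the middle cells.
a = [1.0, 0.8, 0.5, 0.1, 0.0]
b = [0.0, 0.1, 0.5, 0.8, 1.0]
print(select_pore_regions(a, b))  # [1, 2, 3]
```

Because the selected set changes as the plume evolves, the number of pore-scale instances (and hence the computational load) varies from one continuum iteration to the next.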
A New Computational Paradigm in Multiscale Simulations: Application...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
New Computational Paradigm in Multiscale Simulations: Application to Brain Blood Flow ... We present the computational advances that have enabled the first multiscale simulation on ...
Multiscale Simulations of Human Pathologies | Argonne Leadership...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
apex. Inset shows the time evolution of thrombus formation. George Karniadakis, Brown University Multiscale Simulations of Human Pathologies PI Name: George Karniadakis PI...
"Multiscale Capabilities for Exploring Transport Phenomena in...
Office of Scientific and Technical Information (OSTI)
in Batteries": Ab Initio Calculations on Defective LiFePO4 Citation Details In-Document Search Title: "Multiscale Capabilities for Exploring Transport Phenomena in Batteries": Ab ...
Computational Physics and Methods
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computational Physics and Methods: performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. [Figures: growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA; Rayleigh-Taylor turbulence imaging, the largest turbulence simulations to date; density iso-surfaces from advanced multi-scale modeling and turbulence datasets.]
Multiscale characterization and analysis of shapes
Prasad, Lakshman; Rao, Ramana
2002-01-01
An adaptive multiscale method approximates shapes with continuous or uniformly and densely sampled contours, with the purpose of sparsely and nonuniformly discretizing the boundaries of shapes at any prescribed resolution, while at the same time retaining the salient shape features at that resolution. In another aspect, a fundamental geometric filtering scheme using the Constrained Delaunay Triangulation (CDT) of polygonized shapes creates an efficient parsing of shapes into components that have semantic significance dependent only on the shapes' structure and not on their representations per se. A shape skeletonization process generalizes to sparsely discretized shapes, with the additional benefit of prunability to filter out irrelevant and morphologically insignificant features. The skeletal representation of characters of varying thickness and the elimination of insignificant and noisy spurs and branches from the skeleton greatly increases the robustness, reliability and recognition rates of character recognition algorithms.
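The goal of sparsely discretizing a dense contour at a prescribed resolution while keeping salient features can be illustrated with the classic Douglas-Peucker recursion, which keeps a point only if it deviates from the current chord by more than a tolerance. This is only an analogy: the patented method is adaptive and based on the Constrained Delaunay Triangulation, not on this recursion.

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def simplify(points, tol):
    """Recursively keep only points deviating more than tol from the chord."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1])
             for p in points[1:-1]]
    i, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
    if dmax <= tol:
        return [points[0], points[-1]]
    # Split at the farthest point and simplify each half.
    return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)

# A right-angle contour sampled densely along both legs.
contour = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
print(simplify(contour, 0.1))  # [(0, 0), (3, 0), (3, 3)]
```

The corner (3, 0), the salient feature, survives at any reasonable tolerance while the collinear samples are discarded; tightening `tol` recovers finer detail, loosely mirroring the prescribed-resolution behavior described above.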
Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.
2015-04-21
Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. 
We present tests of this method on a series of simple examples, which demonstrate that the method provides realistic dynamic CG models whose non-Markovian (or nearly Markovian) behavior is consistent with the actual dynamical behavior of the all-atom system used to construct the CG model. Both the construction and the simulation of such a dynamic CG model have computational requirements similar to those of the corresponding MS-CG model, making these models good candidates for CG modeling of very large systems.
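The construction described above can be illustrated with a minimal numerical sketch (not the authors' code; the harmonic CG surface, the single fictitious particle, the unit masses, and all parameter values are hypothetical). The CG coordinate evolves under conservative forces only, while the Langevin thermostat acts solely on the auxiliary particle, so the CG coordinate by itself experiences non-Markovian dynamics:

```python
import math
import random

def simulate(n_steps=20000, dt=1e-3, k_cg=1.0, k_couple=0.5,
             gamma=1.0, kT=1.0, seed=0):
    """Semi-implicit Euler sketch: one CG coordinate x on a harmonic CG
    surface, harmonically coupled to one fictitious particle s that carries
    the Langevin thermostat (a Zwanzig-style auxiliary variable)."""
    rng = random.Random(seed)
    x, v = 1.0, 0.0      # CG coordinate and velocity (unit mass)
    s, w = 0.0, 0.0      # fictitious coordinate and velocity (unit mass)
    noise = math.sqrt(2.0 * gamma * kT * dt)
    traj = []
    for _ in range(n_steps):
        f_x = -k_cg * x - k_couple * (x - s)    # conservative forces on x only
        f_s = -k_couple * (s - x) - gamma * w   # friction acts only on s
        v += f_x * dt
        w += f_s * dt + noise * rng.gauss(0.0, 1.0)
        x += v * dt
        s += w * dt
        traj.append(x)
    return traj

traj = simulate()
```

Eliminating s formally yields a generalized Langevin equation for x with an exponential memory kernel, which is the sense in which the CG coordinate is non-Markovian while the extended system remains easy to simulate with standard MD machinery.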
The Adaptive Multi-scale Simulation Infrastructure
Tobin, William R.
2015-09-01
The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimal intrusion. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular effort has been devoted to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into an AMSI multi-scale simulation to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
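The thermalization strategy can be illustrated on a toy model. The sketch below deliberately substitutes a 2-D Ising model for Yang-Mills gauge theory (an assumption made for brevity; there are no gauge links, heat-bath link updates, or HMC here): equilibrate cheaply on a coarse lattice, prolongate the configuration to a 2x finer lattice, then rethermalize briefly with heat-bath updates.

```python
import math
import random

def heatbath_sweep(spins, beta, rng):
    """One heat-bath sweep of a periodic 2-D Ising lattice."""
    n = len(spins)
    for i in range(n):
        for j in range(n):
            h = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
                 spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            spins[i][j] = 1 if rng.random() < p_up else -1

def prolong(coarse):
    """Block-replicate a coarse configuration onto a 2x finer lattice."""
    n = len(coarse)
    return [[coarse[i // 2][j // 2] for j in range(2 * n)] for i in range(2 * n)]

def multiscale_thermalize(n_coarse=8, beta=0.3, coarse_sweeps=50,
                          fine_sweeps=10, seed=3):
    """Thermalize on the coarse level, prolongate, rethermalize on the fine
    level; the coarse pass removes long-wavelength disorder cheaply."""
    rng = random.Random(seed)
    coarse = [[rng.choice([-1, 1]) for _ in range(n_coarse)]
              for _ in range(n_coarse)]
    for _ in range(coarse_sweeps):
        heatbath_sweep(coarse, beta, rng)
    fine = prolong(coarse)
    for _ in range(fine_sweeps):
        heatbath_sweep(fine, beta, rng)
    return fine

config = multiscale_thermalize()
```

In the gauge-theory setting of the abstract, the prolongation acts on link variables and the rethermalization uses heat-bath or HMC evolution; the payoff is the same, namely fine-level configurations that decorrelate far faster than fine-level evolution alone.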
MULTISCALE DYNAMICS OF SOLAR MAGNETIC STRUCTURES
Uritsky, Vadim M.; Davila, Joseph M.
2012-03-20
Multiscale topological complexity of the solar magnetic field is among the primary factors controlling energy release in the corona, including associated processes in the photospheric and chromospheric boundaries. We present a new approach for analyzing multiscale behavior of the photospheric magnetic flux underlying these dynamics as depicted by a sequence of high-resolution solar magnetograms. The approach involves two basic processing steps: (1) identification of timing and location of magnetic flux origin and demise events (as defined by DeForest et al.) by tracking spatiotemporal evolution of unipolar and bipolar photospheric regions, and (2) analysis of collective behavior of the detected magnetic events using a generalized version of the Grassberger-Procaccia correlation integral algorithm. The scale-free nature of the developed algorithms makes it possible to characterize the dynamics of the photospheric network across a wide range of distances and relaxation times. Three types of photospheric conditions are considered to test the method: a quiet photosphere, a solar active region (NOAA 10365) in a quiescent non-flaring state, and the same active region during a period of M-class flares. The results obtained show (1) the presence of a topologically complex asymmetrically fragmented magnetic network in the quiet photosphere driven by meso- and supergranulation, (2) the formation of non-potential magnetic structures with complex polarity separation lines inside the active region, and (3) statistical signatures of canceling bipolar magnetic structures coinciding with flaring activity in the active region. Each of these effects can represent an unstable magnetic configuration acting as an energy source for coronal dissipation and heating.
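The core of step (2), the Grassberger-Procaccia correlation sum, can be sketched in a few lines (a minimal pairwise implementation; the event coordinates and sample sizes are invented, and the generalized space-time version of the abstract is reduced here to plain 2-D spatial distances):

```python
import math
import random

def correlation_integral(points, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of distinct
    point pairs separated by less than r."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if math.hypot(dx, dy) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# For events filling the plane uniformly, C(r) ~ r^2, so the log-log slope
# estimates the correlation dimension (about 2 for this synthetic set).
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(400)]
r1, r2 = 0.05, 0.1
d_est = (math.log(correlation_integral(pts, r2)) -
         math.log(correlation_integral(pts, r1))) / math.log(r2 / r1)
```

A fragmented, clustered magnetic network would show a slope below the embedding dimension over the relevant range of r, which is what makes the statistic useful for the photospheric flux events described above.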
Energy Science and Technology Software Center (OSTI)
2009-08-01
The code to be released is a new addition to the LAMMPS molecular dynamics code. LAMMPS is developed and maintained by Sandia, is publicly available, and is used widely by both national laboratories and academics. The new addition to be released enables LAMMPS to perform molecular dynamics simulations of shock waves using the Multi-scale Shock Simulation Technique (MSST), which we developed and have previously published. This technique enables molecular dynamics simulations of shock waves in materials for orders of magnitude longer timescales than the direct, commonly employed approach.
Method and apparatus for modeling interactions
Xavier, Patrick G.
2000-08-08
A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.
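The swept-volume idea can be sketched in 2-D (a simplification of the patent's hierarchy: axis-aligned boxes stand in for the bounding-volume leaves, the body is a translating polygon, and all geometry below is invented for illustration). Each leaf bounds the region swept over one sub-interval of the translation, and interactions are tested against the union of leaves:

```python
def aabb(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def swept_aabb(points, translation, n_sections=4):
    """Leaves of a (flat) swept-volume hierarchy: each box bounds the region
    swept by the translating polygon over one sub-interval; for a pure
    translation the box of start+end vertices covers that sub-sweep."""
    tx, ty = translation
    leaves = []
    for s in range(n_sections):
        a, b = s / n_sections, (s + 1) / n_sections
        start = [(x + a * tx, y + a * ty) for x, y in points]
        end = [(x + b * tx, y + b * ty) for x, y in points]
        leaves.append(aabb(start + end))
    return leaves

def boxes_overlap(b1, b2):
    return (b1[0] <= b2[2] and b2[0] <= b1[2] and
            b1[1] <= b2[3] and b2[1] <= b1[3])

# A unit square translating by (10, 0) versus a stationary box near x = 5:
moving = [(0, 0), (1, 0), (1, 1), (0, 1)]
obstacle_box = (5.0, 0.2, 6.0, 0.8)
hit = any(boxes_overlap(leaf, obstacle_box)
          for leaf in swept_aabb(moving, (10, 0)))
```

Subdividing the sweep into more sections tightens the union of leaves around the true swept region, which is the trade-off the hierarchical representation in the patent manages; rotations require the convex-hull treatment described in the abstract rather than this box shortcut.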
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
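The SSA referenced above (Gillespie's direct method) is short enough to sketch in full; the toy reaction network and its rate constant below are our own invented example, not from the report:

```python
import math
import random

def ssa(x0, rates, stoich, propensity, t_max, seed=0):
    """Gillespie direct method: draw an exponential waiting time from the
    total propensity, pick a reaction with probability proportional to its
    propensity, and apply its stoichiometry."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while t < t_max:
        a = [propensity(k, x, rates) for k in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire; system is exhausted
            break
        t += -math.log(rng.random()) / a0
        r = rng.random() * a0
        acc = 0.0
        for k, ak in enumerate(a):
            acc += ak
            if acc >= r:
                break
        for i, dv in enumerate(stoich[k]):
            x[i] += dv
        history.append((t, tuple(x)))
    return history

# Toy example (hypothetical rate): isomerization A -> B with propensity c*A.
hist = ssa(x0=[100, 0], rates=[0.5], stoich=[(-1, +1)],
           propensity=lambda k, x, c: c[k] * x[0], t_max=50.0)
```

Because every reaction event is simulated individually, the cost grows with the total number of firings, which is exactly the inefficiency that the tau-leaping and hybrid methods developed in this project address.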
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
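A minimal sketch of the filter-then-recalibrate loop may help unpack the claim language (the range check standing in for the "predefined criterion of good data quality", the min/max envelope standing in for the "learned scope of normal operation", and the blend factor are all our assumptions, not the patent's method):

```python
def filter_good(values, lo, hi):
    """Keep operating-data values meeting a simple quality criterion
    (hypothetical range check); reject the rest."""
    return [v for v in values if lo <= v <= hi]

def recalibrate(model, good_values, blend=0.1):
    """Nudge the learned scope of normal operation (here just a min/max
    envelope) toward the newly accepted data."""
    new_lo, new_hi = min(good_values), max(good_values)
    model["lo"] += blend * (new_lo - model["lo"])
    model["hi"] += blend * (new_hi - model["hi"])
    return model

model = {"lo": 0.0, "hi": 10.0}          # previously trained envelope
raw = [1.2, 5.5, 999.0, -40.0, 9.8]      # 999.0 and -40.0 fail the quality check
good = filter_good(raw, -20.0, 100.0)
model = recalibrate(model, good)
```

The point of the filtering step is that only data judged trustworthy is allowed to move the model, so sensor glitches cannot widen the learned scope of normal operation.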
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
Yang, Judith C.
2015-01-09
The purpose of this grant is to develop multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, as the PI (Yang) has extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy methods. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed under this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.
Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Vikas Tomer; John Renaud
2010-08-31
It is estimated that by using better and improved high-temperature structural materials, the power generation efficiency of power plants can be increased by 15%, resulting in significant cost savings. One such promising material system for future high-temperature structural applications in power plants is Silicon Carbide-Silicon Nitride (SiC-Si{sub 3}N{sub 4}) nanoceramic matrix composites. The described research work focuses on multiscale simulation-based design of these SiC-Si{sub 3}N{sub 4} nanoceramic matrix composites. There were two primary objectives of the research: (1) development of a multiscale simulation tool and corresponding multiscale analyses of the high-temperature creep and fracture resistance properties of the SiC-Si{sub 3}N{sub 4} nanocomposites at nano-, meso-, and continuum length- and timescales; and (2) development of a simulation-based robust design optimization methodology for application to the multiscale simulations to predict the range of the most suitable phase morphologies for the desired high-temperature properties of the SiC-Si{sub 3}N{sub 4} nanocomposites. The multiscale simulation tool is based on a combination of molecular dynamics (MD), the cohesive finite element method (CFEM), and continuum-level modeling for characterizing time-dependent material deformation behavior. The material simulation tool is incorporated in a variable-fidelity model-management-based design optimization framework. Material modeling includes development of an experimental verification framework. Using material models based on multiscaling, molecular simulations showed that clustering of the SiC particles near Si{sub 3}N{sub 4} grain boundaries leads to significant nanocomposite strengthening and a significant rise in fracture resistance. It was also found that controlling grain boundary thicknesses by dispersing non-stoichiometric carbide or nitride phases can lead to a reduction in strength but a significant rise in fracture strength.
The temperature-dependent strength and microstructural stability also depended significantly on the dispersion of new phases at grain boundaries. The material design framework incorporates high-temperature creep and mechanical strength data in order to develop a collaborative multiscale framework for morphology optimization. The work also incorporates a computer-aided material design dataset development procedure, in which a systematic dataset on material property and morphology correlations can be obtained according to a material processing scientist's requirements. Two different aspects covered under this requirement are: (1) performing morphology-related analyses at the nanoscale and at the microscale to develop a multiscale material design and analysis capability; and (2) linking material behavior analyses with the developed design tool to form a set of material design problems that illustrate the range of material design dataset development that could be performed. Overall, a software-based methodology to design the microstructure of particle-based ceramic nanocomposites has been developed. This methodology has been shown to predict the changes in phase morphologies required for achieving an optimal balance of conflicting properties, such as minimal creep strain rate and high fracture strength at high temperatures. The methodology incorporates complex material models, including atomistic approaches, and will be useful for designing materials for high-temperature applications, including those of interest to DOE, while significantly reducing the cost of expensive experiments.
Liu, Dajiang [Ames Laboratory; Evans, James W. [Ames Laboratory
2013-12-01
A realistic molecular-level description of catalytic reactions on single-crystal metal surfaces can be provided by stochastic multisite lattice-gas (msLG) models. This approach has general applicability, although in this report, we will focus on the example of CO-oxidation on the unreconstructed fcc metal (100) or M(100) surfaces of common catalyst metals M = Pd, Rh, Pt and Ir (i.e., avoiding regimes where Pt and Ir reconstruct). These models can capture the thermodynamics and kinetics of adsorbed layers for the individual reactants species, such as CO/M(100) and O/M(100), as well as the interaction and reaction between different reactant species in mixed adlayers, such as (CO + O)/M(100). The msLG models allow population of any of hollow, bridge, and top sites. This enables a more flexible and realistic description of adsorption and adlayer ordering, as well as of reaction configurations and configuration-dependent barriers. Adspecies adsorption and interaction energies, as well as barriers for various processes, constitute key model input. The choice of these energies is guided by experimental observations, as well as by extensive Density Functional Theory analysis. Model behavior is assessed via Kinetic Monte Carlo (KMC) simulation. We also address the simulation challenges and theoretical ramifications associated with very rapid diffusion and local equilibration of reactant adspecies such as CO. These msLG models are applied to describe adsorption, ordering, and temperature programmed desorption (TPD) for individual CO/M(100) and O/M(100) reactant adlayers. In addition, they are also applied to predict mixed (CO + O)/M(100) adlayer structure on the nanoscale, the complete bifurcation diagram for reactive steady-states under continuous flow conditions, temperature programmed reaction (TPR) spectra, and titration reactions for the CO-oxidation reaction. Extensive and reasonably successful comparison of model predictions is made with experimental data. 
Furthermore, we discuss the possible transition from traditional mean-field-type bistability and reaction kinetics for lower-pressure to multistability and enhanced fluctuation effects for moderate- or higher-pressure. Behavior in the latter regime reflects a stronger influence of adspecies interactions and also lower diffusivity in the higher-coverage mixed adlayer. We also analyze mesoscale spatiotemporal behavior including the propagation of reaction diffusion fronts between bistable reactive and inactive states, and associated nucleation-mediated transitions between these states. This behavior is controlled by complex surface mass transport processes, specifically chemical diffusion in mixed reactant adlayers for which we provide a precise theoretical formulation. The msLG models together with an appropriate treatment of chemical diffusivity enable equation-free heterogeneous coupled lattice-gas (HCLG) simulations of spatiotemporal behavior. In addition, msLG + HCLG modeling can describe coverage variations across polycrystalline catalysts surfaces, pressure variations across catalyst surfaces in microreactors, and could be incorporated into a multiphysics framework to describe mass and heat transfer limitations for high-pressure catalysis. (C) 2013 Elsevier Ltd. All rights reserved.
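A drastically simplified KMC sketch of the CO-oxidation lattice-gas picture may be useful here (our own toy: a 1-D single-site-type lattice with invented rates, versus the multisite 2-D msLG models with DFT-derived, configuration-dependent barriers described above). Events are CO adsorption, dissociative O2 adsorption onto empty pairs, and reaction of adjacent CO + O to CO2:

```python
import random

def kmc_co_oxidation(n_sites=50, n_steps=2000, rates=None, seed=2):
    """Rejection-free KMC on a periodic 1-D lattice: enumerate all possible
    events with their rates, pick one proportionally to its rate, apply it."""
    rates = rates or {"co_ads": 1.0, "o_ads": 1.0, "react": 10.0}
    rng = random.Random(seed)
    lat = ["*"] * n_sites          # '*' empty, 'C' adsorbed CO, 'O' atomic O
    co2 = 0
    for _ in range(n_steps):
        events = []
        for i in range(n_sites):
            j = (i + 1) % n_sites
            if lat[i] == "*":
                events.append((rates["co_ads"], ("co", i)))
            if lat[i] == "*" and lat[j] == "*":
                events.append((rates["o_ads"], ("o2", i)))
            if {lat[i], lat[j]} == {"C", "O"}:
                events.append((rates["react"], ("rx", i)))
        total = sum(rk for rk, _ in events)
        if total == 0:
            break                          # fully poisoned lattice
        pick = rng.random() * total
        acc = 0.0
        for rk, (kind, i) in events:
            acc += rk
            if acc >= pick:
                break
        j = (i + 1) % n_sites
        if kind == "co":
            lat[i] = "C"
        elif kind == "o2":                 # dissociative O2 adsorption
            lat[i] = lat[j] = "O"
        else:                              # CO + O -> CO2 leaves both sites empty
            lat[i] = lat[j] = "*"
            co2 += 1
    return lat, co2

lattice, n_co2 = kmc_co_oxidation()
```

The msLG models of the report add hollow/bridge/top site populations, adspecies interactions, desorption, and very fast CO diffusion; the last of these is what forces the specialized local-equilibration treatments discussed above.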
Adomian Decomposition Method for Quark Gluon Plasma Model
Constantinescu, Radu; Ionescu, Carmen; Stoicescu, Mihai
2011-10-03
The paper investigates the possibility of obtaining analytical solutions for the Quark Gluon Plasma model using the Adomian decomposition method.
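The Adomian decomposition method itself is easy to demonstrate on a textbook initial value problem (our choice of example, not the plasma equations of the paper): for y' = y^2 with y(0) = 1, the Adomian polynomials of N(y) = y^2 are A_k = sum over i+j=k of y_i y_j, each component is y_{k+1} = integral of A_k from 0 to t, and the series sums to the exact solution 1/(1-t):

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists in t."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_int(a):
    """Integrate a polynomial from 0 to t (term-by-term)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def adomian_y2(n_terms=6):
    """ADM for y' = y^2, y(0) = 1: y0 = 1 and y_{k+1} = integral of A_k,
    with A_k = sum_{i+j=k} y_i * y_j for the nonlinearity N(y) = y^2."""
    ys = [[1.0]]                           # y0 as a polynomial in t
    for k in range(n_terms - 1):
        A = [0.0]
        for i in range(k + 1):
            A = poly_add(A, poly_mul(ys[i], ys[k - i]))
        ys.append(poly_int(A))
    total = [0.0]
    for y in ys:
        total = poly_add(total, y)
    return total                           # truncated series coefficients

coeffs = adomian_y2()
```

Each component comes out as y_k = t^k, so the truncated sum reproduces the geometric series for 1/(1-t); the appeal of the method for the quark-gluon plasma model is the same, namely semi-analytical series solutions without linearization.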
Collaboratory for Multiscale Chemical Science (CMCS)
Allison, Thomas C
2012-07-03
This document provides details of the contributions made by NIST to the Collaboratory for Multiscale Chemical Science (CMCS) project. In particular, efforts related to the provision of data (and software in support of that data) relevant to the combustion pilot project are described.
Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Day-Lewis, Frederick; Singha, Kamini; Haggerty, Roy; Johnson, Tim; Binley, Andrew; Lane, John
2014-01-16
Mass transfer affects contaminant transport and is thought to control the efficiency of aquifer remediation at a number of sites within the Department of Energy (DOE) complex. An improved understanding of mass transfer is critical to meeting the enormous scientific and engineering challenges currently facing DOE. Informed design of site remedies and long-term stewardship of radionuclide-contaminated sites will require new cost-effective laboratory and field techniques to measure the parameters controlling mass transfer spatially and across a range of scales. In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Including the NMR component, our revised study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. 
To achieve our objectives, we implemented a 3-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE’s Hanford 300 Area. In a synergistic add-on to our workplan, we analyzed data from field experiments performed at the DOE Naturita Site under a separate DOE SBR grant, on which PI Day-Lewis served as co-PI. Techniques developed for application to Hanford datasets also were applied to data from Naturita.
1. Introduction
The Department of Energy (DOE) faces enormous scientific and engineering challenges associated with the remediation of legacy contamination at former nuclear weapons production facilities. Selection, design and optimization of appropriate site remedies (e.g., pump-and-treat, biostimulation, or monitored natural attenuation) requires reliable predictive models of radionuclide fate and transport; however, our current modeling capabilities are limited by an incomplete understanding of multi-scale mass transfer—its rates, scales, and the heterogeneity of controlling parameters. At many DOE sites, long “tailing” behavior, concentration rebound, and slower-than-expected cleanup are observed; these observations are all consistent with multi-scale mass transfer [Haggerty and Gorelick, 1995; Haggerty et al., 2000; 2004], which renders pump-and-treat remediation and biotransformation inefficient and slow [Haggerty and Gorelick, 1994; Harvey et al., 1994; Wilson, 1997]. Despite the importance of mass transfer, there are significant uncertainties associated with controlling parameters, and the prevalence of mass transfer remains a point of debate [e.g., Hill et al., 2006; Molz et al., 2006] for lack of experimental methods to verify and measure it in situ or independently of tracer breakthrough.
There is a critical need for new field-experimental techniques to measure mass transfer in-situ and estimate multi-scale and spatially variable mass-transfer parameters. The current lack of such techniques results in large parameter uncertainty, which in turn translates into enormous prediction uncertainty and cost to DOE. In this project, we considered three hydrogeophysical approaches for providing information about mass-transfer parameters: (1) the combination of electrical-resistivity tomography (ERT) and ionic tracer experiments to explore rates of exchange and relative mobile and immobile porosities; (2) complex resistivity (CR) measurements to infer the distribution of diffusive length scales active in a porous medium; and (3) nuclear magnetic resonance (NMR) to estimate mobile and immobile porosity.
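The mobile-immobile exchange that underlies the geoelectrical signature can be sketched with a single-rate bicontinuum model (a sketch only: the multirate formulations of Haggerty and Gorelick use a distribution of rates, and the porosities, exchange rate, and flushing rate below are invented). Because the bulk conductivity mixes both domains while the sampled fluid reflects only the mobile domain, the two lag each other, producing the hysteresis described above:

```python
def mobile_immobile(c_in, alpha=0.05, theta_m=0.2, theta_im=0.1, dt=1.0):
    """Explicit-Euler single-rate mobile-immobile exchange: cm is flushed by
    the inflow concentration c_in(t) and exchanges linearly with cim; 'bulk'
    is the porosity-weighted mixture of both domains."""
    cm, cim = 0.0, 0.0
    mobile, bulk = [], []
    for c0 in c_in:
        ex = alpha * (cm - cim)                        # exchange flux
        cm += dt * ((c0 - cm) * 0.2 - ex / theta_m)    # 0.2 = flushing rate (hypothetical)
        cim += dt * ex / theta_im
        mobile.append(cm)
        bulk.append((theta_m * cm + theta_im * cim) / (theta_m + theta_im))
    return mobile, bulk

# Step tracer injection followed by flushing with clean water:
inflow = [1.0] * 100 + [0.0] * 200
mob, blk = mobile_immobile(inflow)
```

During loading the immobile domain lags below the mobile concentration (bulk below mobile), and during flushing it lags above (bulk above mobile): plotted against each other, the two trace the hysteresis loop that the ERT-plus-tracer approach exploits.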
Weather Research and Forecasting Model with the Immersed Boundary Method
Energy Science and Technology Software Center (OSTI)
2012-05-01
The Weather Research and Forecasting (WRF) Model with the immersed boundary method is an extension of the open-source WRF Model available from www.wrf-model.org. The new code modifies the gridding procedure and boundary conditions in the WRF Model to improve WRF's ability to simulate the atmosphere in environments with steep terrain, and additionally at high resolutions.
Systems, Methods and Computer Readable Media for Modeling Cell...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Systems, Methods and Computer Readable Media for Modeling Cell Performance Fade, Kinetic ... CellSage also supports battery system development by characterizing cell and string ...
University of Maryland Component of the Center for Multiscale Plasma Dynamics: Final Technical Report
Office of Scientific and Technical Information (OSTI)
The Center for Multiscale Plasma Dynamics (CMPD) was a five-year Fusion Science Center. The University of Maryland (UMD) and UCLA ...
Theory & Modeling | Argonne National Laboratory
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Interactions with nanostructures; methods and software development, including multiscale approaches to assembly. Group Lead: Stephen Gray. People: Maria K. Y. Chan, Larry Curtiss ...
An adaptive wavelet stochastic collocation method for irregular...
Office of Scientific and Technical Information (OSTI)
adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. ...
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
Multiscale Computation. Needs and Opportunities for BER Science
Scheibe, Timothy D.; Smith, Jeremy C.
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: (1) identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and (2) identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.
Self-Consistent Multiscale Theory of Internal Wave, Mean-Flow Interactions
Holm, D.D.; Aceves, A.; Allen, J.S.; Alber, M.; Camassa, R.; Cendra, H.; Chen, S.; Duan, J.; Fabijonas, B.; Foias, C.; Fringer, O.; Gent, P.R.; Jordan, R.; Kouranbaeva, S.; Kovacic, G.; Levermore, C.D.; Lythe, G.; Lifschitz, A.; Marsden, J.E.; Margolin, L.; Newberger, P.; Olson, E.; Ratiu, T.; Shkoller, S.; Timofeyev, I.; Titi, E.S.; Wynn, S.
1999-06-03
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The research reported here produced new effective ways to solve multiscale problems in nonlinear fluid dynamics, such as turbulent flow and global ocean circulation. This was accomplished by first developing new methods for averaging over random or rapidly varying phases in nonlinear systems at multiple scales. We then used these methods to derive new equations for analyzing the mean behavior of fluctuation processes coupled self-consistently to nonlinear fluid dynamics. This project extends a technology base relevant to a variety of multiscale problems in fluid dynamics of interest to the Laboratory and applies this technology to those problems. The project's theoretical and mathematical developments also help advance our understanding of the scientific principles underlying the control of complex behavior in fluid dynamical systems with strong spatial and temporal internal variability.
Byrne, Jason P.; Morgan, Huw; Habbal, Shadia R.; Gallagher, Peter T.
2012-06-20
Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the Solar and Heliospheric Observatory (SOHO), Solar TErrestrial RElations Observatory (STEREO), and future coronagraph missions also makes manual cataloging of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloging packages such as CACTus, SEEDS, and ARTEMIS. Here, we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of current catalogs. It works by first employing the dynamic CME separation technique outlined in a companion paper, and then characterizing CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane of sky. The effectiveness of the method is demonstrated by its application to a selection of SOHO/LASCO and STEREO/SECCHI images, as well as to synthetic coronagraph images created from a model corona with a variety of CMEs. The algorithms described in this article are being applied to the whole LASCO and SECCHI data sets, and a catalog of results will soon be available to the public.
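The multiscale edge-detection step can be sketched in one dimension (a simplification of the CORIMP pipeline, which operates on 2-D coronagraph frames; the step profile, scales, and threshold below are invented). Edges are kept only if they remain strong after smoothing at every scale, which suppresses noise while retaining the persistent CME front:

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """Convolve with the Gaussian kernel, clamping at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def multiscale_edges(signal, sigmas=(1.0, 2.0, 4.0), thresh=0.05):
    """Keep positions whose gradient magnitude exceeds the threshold at
    every smoothing scale (scale-persistent edges)."""
    persistent = None
    for s in sigmas:
        sm = smooth(signal, s)
        grad = [abs(sm[i + 1] - sm[i - 1]) / 2 for i in range(1, len(sm) - 1)]
        strong = {i for i, g in enumerate(grad, start=1) if g > thresh}
        persistent = strong if persistent is None else persistent & strong
    return sorted(persistent)

# Step profile, a crude stand-in for a brightness front in a coronagraph row:
profile = [0.0] * 30 + [1.0] * 30
edges = multiscale_edges(profile)
```

Chaining such detections frame to frame is what yields the CME kinematics and morphological evolution described in the abstract.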
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
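The coarse-to-fine thermalization strategy can be caricatured with a 2-D Ising model (an assumption for illustration; the paper works with lattice gauge fields): equilibrate cheaply on a coarse lattice, prolong by block replication, then relax briefly at the fine level.

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_bath_sweep(spins, beta):
    """One heat-bath sweep of a 2-D Ising model with periodic boundaries."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            h = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                 + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            spins[i, j] = 1 if rng.random() < p_up else -1

def prolong(coarse):
    """Block-replicate a coarse configuration onto a 2x finer lattice."""
    return np.kron(coarse, np.ones((2, 2), dtype=int))

coarse = rng.choice([-1, 1], size=(8, 8))
for _ in range(50):                 # cheap equilibration at the coarse level
    heat_bath_sweep(coarse, beta=0.5)
fine = prolong(coarse)
for _ in range(5):                  # short re-thermalization at the fine level
    heat_bath_sweep(fine, beta=0.5)
```

The point of the multiscale ordering is that long-wavelength structure is produced cheaply at the coarse level, so only a short decorrelation run is needed after refinement.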
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
LEAL, L. GARY
2013-06-30
One of the most challenging multi-scale simulation problems in the area of multi-phase materials is to develop effective computational techniques for the prediction of coalescence and related phenomena involving rupture of a thin liquid film due to the onset of instability driven by van der Waals or other micro-scale attractive forces. Accurate modeling of this process is critical to prediction of the outcome of milling processes for immiscible polymer blends, one of the most important routes to new advanced polymeric materials. In typical situations, the blend evolves into an "emulsion" of dispersed phase drops in a continuous matrix fluid. Coalescence is then a critical factor in determining the size distribution of the dispersed phase, but is extremely difficult to predict from first principles. The thin film separating two drops may only achieve rupture at dimensions of approximately 10 nm, while the drop sizes are O(10 μm). It is essential to achieve very accurate solutions for the flow and for the interface shape at both the macroscale of the full drops and within the thin film (where the destabilizing disjoining pressure due to van der Waals forces is approximately proportional to the inverse third power of the local film thickness, h⁻³). Furthermore, the fluids of interest are polymeric (though Newtonian), and the classical continuum description begins to fail as the film thins, requiring incorporation of molecular effects, for example through a hybrid code that couples a version of coarse-grain molecular dynamics within the thin film with a classical continuum description elsewhere in the flow domain.
Finally, the presence of surface-active additives, either surfactants (in the form of di-block copolymers) or surface-functionalized micro- or nano-scale particles, adds a further level of complexity, requiring development of a distinct numerical method to predict the nonuniform concentration gradients of these additives that are responsible for Marangoni stresses at the interface. Again, the physical dimensions of these additives may become comparable to the thin-film dimensions, requiring an additional layer of multi-scale modeling.
Martin Karplus and Computer Modeling for Chemical Systems
Office of Scientific and Technical Information (OSTI)
Martin Karplus and Computer Modeling for Chemical Systems Resources with Additional ... of multiscale models for complex chemical systems."1 Karplus "has been using ...
Probability of detection models for eddy current NDE methods
Rajesh, S.N.
1993-04-30
The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.
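The core POD computation can be sketched with a deliberately simple signal model: a Gaussian signal whose mean grows linearly with flaw size. The gain, noise level, and threshold below are illustrative assumptions; the thesis derives the signal distributions from finite element analysis instead.

```python
import numpy as np
from math import erf, sqrt

def pod(flaw_size, threshold, noise_sigma=0.1, gain=2.0):
    """POD = P(signal > threshold) when the signal is Gaussian with a
    mean proportional to flaw size (illustrative signal model only)."""
    mean = gain * flaw_size
    z = (threshold - mean) / (noise_sigma * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

sizes = np.linspace(0.0, 0.5, 6)
curve = [pod(a, threshold=0.4) for a in sizes]   # one POD value per flaw size
```

At the flaw size whose mean signal equals the threshold, the POD is exactly 0.5; selecting that threshold optimally is the role of the method-of-mixtures step in the thesis.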
High-Order/Low-Order methods for ocean modeling
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; Knoll, Dana A.
2015-06-01
We examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We demonstrate how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
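The flavor of an implicit-explicit update can be shown on a scalar model problem. This is a first-order IMEX Euler sketch with assumed coefficients; the HOLO scheme itself is second-order and operates on the barotropic/baroclinic split rather than a scalar ODE.

```python
def imex_euler(u0, k_fast, slow, dt, steps):
    """First-order IMEX Euler for du/dt = -k_fast*u + slow(u): the stiff
    linear term is treated implicitly, the slow term explicitly."""
    u = u0
    for _ in range(steps):
        u = (u + dt * slow(u)) / (1.0 + dt * k_fast)
    return u

# The timestep dt = 0.1 is 100x larger than the fast time scale 1/k_fast.
u = imex_euler(u0=1.0, k_fast=1000.0, slow=lambda v: 1.0, dt=0.1, steps=200)
```

The iteration remains stable and converges to the steady state u = 1/k_fast despite the large timestep, which is the point of treating the fast scale implicitly.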
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder (E-mail: balbir@petronas.com.my)
2014-10-24
This paper studies the use of several types of curve-fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of the global solar radiation is developed. Error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best-fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
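A sinusoidal fit with RMSE and R² can be reproduced with ordinary least squares when the period is known. The 24-hour cycle and the synthetic data below are assumptions for illustration, not the UTP measurements.

```python
import numpy as np

def fit_sinusoid(t, y, omega):
    """Least-squares fit of y ~ a*sin(omega*t) + b*cos(omega*t) + c,
    returning coefficients plus RMSE and R^2 goodness-of-fit statistics."""
    A = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = float(1.0 - resid.var() / y.var())
    return coef, rmse, r2

# Synthetic irradiance-like series with a known 24-hour cycle.
t = np.linspace(0.0, 48.0, 97)
y = 500.0 + 300.0 * np.sin(2.0 * np.pi * t / 24.0)
coef, rmse, r2 = fit_sinusoid(t, y, omega=2.0 * np.pi / 24.0)
```

Because the model is linear in its coefficients once the frequency is fixed, no nonlinear optimizer is needed, and R² close to 1 with small RMSE indicates a good fit.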
A meshless method for modeling convective heat transfer
Carrington, David B
2010-01-01
A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
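The radial-basis-function machinery behind such meshless methods reduces, in its simplest interpolation form, to solving a dense linear system at scattered nodes. The 1-D Gaussian RBFs, shape parameter, and node count below are illustrative assumptions; the paper solves flow equations rather than a pure interpolation problem.

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=5.0):
    """Meshless interpolation with Gaussian RBFs: solve Phi w = values at
    the scattered nodes, then evaluate the expansion at `query` points."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    dist = np.abs(centers[:, None] - centers[None, :])
    weights = np.linalg.solve(phi(dist), values)
    return phi(np.abs(query[:, None] - centers[None, :])) @ weights

nodes = np.linspace(0.0, 1.0, 15)           # "mesh-free" nodes
vals = np.sin(2.0 * np.pi * nodes)
query = np.array([0.125, 0.375])
approx = rbf_interpolate(nodes, vals, query)
```

No mesh connectivity is ever built: the only geometric information used is pairwise distance between nodes, which is what makes the approach easy to implement, for example in a MATLAB-style environment.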
Progress in fast, accurate multi-scale climate simulations
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Progress in Fast, Accurate Multi-scale Climate Simulations
Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter
2015-01-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Arctic sea ice modeling with the material-point method.
Peterson, Kara J.; Bochev, Pavel Blagoveston
2010-04-01
Arctic sea ice plays an important role in global climate by reflecting solar radiation and insulating the ocean from the atmosphere. Due to feedback effects, the Arctic sea ice cover is changing rapidly. To accurately model this change, high-resolution calculations must incorporate: (1) the annual cycle of growth and melt due to radiative forcing; (2) mechanical deformation due to surface winds, ocean currents, and Coriolis forces; and (3) localized effects of leads and ridges. We have demonstrated a new mathematical algorithm for solving the sea ice governing equations using the material-point method with an elastic-decohesive constitutive model. An initial comparison with the LANL CICE code indicates that the ice edge is sharper using the Material-Point Method (MPM), but that many of the overall features are similar.
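The particle-to-grid transfer at the heart of the material-point method can be sketched in 1-D with linear shape functions. This is illustrative only; the sea-ice model adds the elastic-decohesive constitutive update and 2-D dynamics on top of such transfers.

```python
import numpy as np

def particles_to_grid(x_p, m_p, v_p, nodes):
    """MPM-style transfer: map particle mass and momentum to grid nodes
    using linear (tent) shape functions on a uniform 1-D grid."""
    dx = nodes[1] - nodes[0]
    mass = np.zeros(len(nodes))
    momentum = np.zeros(len(nodes))
    for x, m, v in zip(x_p, m_p, v_p):
        i = int((x - nodes[0]) // dx)      # left node of the containing cell
        w = (x - nodes[i]) / dx            # weight attached to the right node
        mass[i] += m * (1.0 - w)
        mass[i + 1] += m * w
        momentum[i] += m * v * (1.0 - w)
        momentum[i + 1] += m * v * w
    return mass, momentum

nodes = np.linspace(0.0, 1.0, 11)
mass, momentum = particles_to_grid([0.23, 0.27, 0.61],
                                   [1.0, 1.0, 2.0],
                                   [0.5, 0.5, -1.0], nodes)
```

Mass and momentum are conserved exactly by the transfer, which is part of what makes the method robust for sharp features such as the ice edge.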
Bayesian methods for characterizing unknown parameters of material models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
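The Bayesian step can be illustrated with a scalar parameter and a grid posterior. This is a deliberate simplification: the report uses Monte Carlo simulation and stochastic reduced order models, and the flat prior, Gaussian likelihood, and data below are assumptions.

```python
import numpy as np

def grid_posterior(grid, prior_pdf, data, noise_sigma):
    """Posterior over a scalar parameter on a grid: multiply the prior by a
    Gaussian likelihood for each measurement, then renormalize."""
    log_post = np.log(prior_pdf)
    for y in data:
        log_post = log_post - 0.5 * ((y - grid) / noise_sigma) ** 2
    post = np.exp(log_post - log_post.max())   # stabilized exponentiation
    return post / post.sum()

theta = np.linspace(0.0, 2.0, 401)
prior = np.ones_like(theta) / 2.0              # flat prior on [0, 2]
data = [0.9, 1.1, 1.0, 0.95]                   # noisy observations of theta
post = grid_posterior(theta, prior, data, noise_sigma=0.2)
posterior_mean = float((theta * post).sum())
```

With a flat prior, the posterior mean recovers the sample mean of the measurements, and the posterior width quantifies the remaining parameter uncertainty.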
A New Method of Comparing Forcing Agents in Climate Models
Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.; Jarvis, Andrew
2015-10-14
We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
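The external feedback loop can be mimicked with a toy energy-balance model: an integral controller adjusts a solar-forcing term until it exactly offsets a CO2 forcing while temperature is held at zero. All parameter values are illustrative assumptions, not those of the two climate models compared in the paper.

```python
def constant_temperature_experiment(f_co2, steps=5000, dt=0.1,
                                    lam=1.2, heat_cap=8.0, gain=0.5):
    """Toy energy-balance model, dT/dt = (F_co2 + F_solar - lam*T)/C, with
    an external integral controller driving T back to zero."""
    temp, f_solar = 0.0, 0.0
    for _ in range(steps):
        temp += dt * (f_co2 + f_solar - lam * temp) / heat_cap
        f_solar -= dt * gain * temp        # integral control on the error
    return temp, f_solar

temp, f_solar = constant_temperature_experiment(f_co2=3.7)
```

At convergence the solar adjustment equals -F_co2, so the relationship between the two forcing agents can be read off directly without ever computing a radiative forcing, which is the stated advantage of the approach.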
Bridging the PSI Knowledge Gap: A Multi-Scale Approach
Wirth, Brian D
2015-01-08
Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that four of the top five fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that both exploits access to state-of-the-art PSI experiments and modeling, as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions.
The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de-coupled extrapolation to a multi-scale, coupled approach. The PSI Plasma Center consisted of three equal co-centers; one located at the MIT Plasma Science and Fusion Center, one at UC San Diego Center for Energy Research and one at the UC Berkeley Department of Nuclear Engineering, which moved to the University of Tennessee, Knoxville (UTK) with Professor Brian Wirth in July 2010. The Center had three co-directors: Prof. Dennis Whyte led the MIT co-center, the UCSD co-center was led by Dr. Russell Doerner, and Prof. Brian Wirth led the UCB/UTK center. The directors have extensive experience in PSI and material research, and have been internationally recognized in the magnetic fusion, materials and plasma research fields. The co-centers feature keystone PSI experimental and modeling facilities dedicated to PSI science: the DIONISOS/CLASS facility at MIT, the PISCES facility at UCSD, and the state-of-the-art numerical modeling capabilities at UCB/UTK. A collaborative partner in the center is Sandia National Laboratory at Livermore (SNL/CA), which has extensive capabilities with low energy ion beams and surface diagnostics, as well as supporting plasma facilities, including the Tritium Plasma Experiment, all of which significantly augment the Center. Interpretive, continuum material models are available through SNL/CA, UCSD and MIT. The participating institutions of MIT, UCSD, UCB/UTK, SNL/CA and LLNL brought a formidable array of experimental tools and personnel abilities into the PSI Plasma Center. Our work has focused on modeling activities associated with plasma surface interactions that are involved in effects of He and H plasma bombardment on tungsten surfaces. This involved performing computational material modeling of the surface evolution during plasma bombardment using molecular dynamics modeling. 
The principal outcomes of the research efforts within the combined experimental – modeling PSI center are to provide a knowledgebase of the mechanisms of surface degradation, and the influence of the surface on plasma conditions.
Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm (ScienceCinema)
Office of Scientific and Technical Information (OSTI)
Next Generation Multi-Scale Quantum Simulation Software for Strongly Correlated Materials
Jarrell, Mark
2014-11-18
The goal of this project was to develop a new formalism for the correlated electron problem, which we call the Multi Scale Many Body formalism. This report will focus on the work done at Louisiana State University (LSU) since the mid-term report. The LSU group moved from the University of Cincinnati (UC) to LSU in the summer of 2008. In the last full year at UC, only half of the funds were received, and it took nearly two years for the funds to be transferred from UC to LSU. This effectively shut down the research at LSU until the transfer was completed in 2011; there were also two no-cost extensions of the grant until August of this year. The grant ended for the other SciDAC partners at Davis and ORNL in 2011. Since the mid-term report, the LSU group has published 19 papers [P1-P19] acknowledging this SciDAC, which are listed below. In addition, numerous invited talks acknowledged the SciDAC. Below, we summarize the work at LSU since the mid-term report and mainly since funding resumed. The projects include (1) the further development of multi-scale methods for correlated systems, (2) the study of quantum criticality at finite doping in the Hubbard model, (3) the description of a promising new method to study Anderson localization with a million-fold reduction of computational complexity, (4) the description of other projects, and (5) a workshop to close out the project that brought together exascale program developers (Stellar, MPI, OpenMP, ...) with applications developers.
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
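Sampling scenario parameters from fitted PDFs is often done with Latin hypercube designs so that a small number of scenarios spans the joint uncertainty space. A sketch follows; the Normal(1.0, 0.3) marginal is an illustrative assumption, not the AEEI distribution estimated in the project.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n_samples, n_params, rng):
    """Latin hypercube sample on [0,1)^d: every one of the n strata in each
    dimension is hit exactly once, improving coverage over plain sampling."""
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])               # decorrelate the dimensions
    return u

rng = np.random.default_rng(42)
unit = latin_hypercube(100, 2, rng)
# Push one column through an inverse CDF to obtain parameter samples.
samples = [NormalDist(mu=1.0, sigma=0.3).inv_cdf(p) for p in unit[:, 0]]
```

The inverse-CDF mapping lets the same stratified design serve any fitted marginal PDF, which is useful when only a handful of multi-gas scenarios can be run.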
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd₂O₂S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd₂O₂S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd₂O₂S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
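The Monte Carlo light-transport idea can be reduced to a 1-D random walk with exponential free paths and an absorb-or-scatter choice at each interaction. This is a caricature: the study uses Mie theory for angular scattering probabilities, whereas the coefficients and isotropic 1-D scattering below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmit_fraction(thickness, mu_scatter, mu_absorb, n_photons=20000):
    """Fraction of photons crossing a slab: exponential step lengths, and at
    each interaction the photon is absorbed or scattered isotropically."""
    mu_total = mu_scatter + mu_absorb
    p_absorb = mu_absorb / mu_total
    transmitted = 0
    for _ in range(n_photons):
        x, direction = 0.0, 1.0
        while True:
            x += direction * rng.exponential(1.0 / mu_total)
            if x >= thickness:
                transmitted += 1           # escaped through the exit face
                break
            if x <= 0.0:
                break                      # escaped back through the entrance
            if rng.random() < p_absorb:
                break                      # absorbed within the screen
            direction = rng.choice([-1.0, 1.0])
    return transmitted / n_photons

t_weak = transmit_fraction(1.0, mu_scatter=5.0, mu_absorb=0.5)
t_strong = transmit_fraction(1.0, mu_scatter=5.0, mu_absorb=2.0)
```

Raising the absorption coefficient suppresses transmission, mirroring how the balance of scattering and absorption controls screen light output and resolution.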
A robust absorbing layer method for anisotropic seismic wave modeling
Métivier, L.; Brossier, R.; Labbé, S.; Operto, S.; Virieux, J.
2014-12-15
When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities. Incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial, as in many seismic imaging applications, accounting accurately for the subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to the PML approach. This method is based on the decomposition of the wavefield into components propagating into and out of the domain of interest. Only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched, and therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which the SMART method is based can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped.
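The absorbing-layer idea can be caricatured in 1-D, where a graded sponge multiplies the solution by a decay factor inside a boundary layer while a simple upwind scheme advects a pulse out of the domain. This illustrates damping of outgoing energy in a layer, not the SMART wavefield decomposition itself; all parameters are assumptions.

```python
import numpy as np

def advect_with_sponge(steps=300, n=200, c=1.0, dt=0.004,
                       width=30, sigma_max=50.0):
    """Upwind advection of a Gaussian pulse toward the right boundary, with
    a graded damping layer occupying the last `width` cells."""
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.exp(-((x - 0.5) / 0.05) ** 2)          # initial pulse
    sigma = np.zeros(n)
    sigma[-width:] = sigma_max * np.linspace(0.0, 1.0, width) ** 2
    for _ in range(steps):
        u[1:] -= c * dt / dx * (u[1:] - u[:-1])   # first-order upwind step
        u *= np.exp(-sigma * dt)                  # damp inside the layer
    return u

u_final = advect_with_sponge()
```

The gradual ramp of the damping coefficient is what limits reflections at the layer entrance; the SMART method goes further by damping only the outgoing part of the wavefield, which keeps the scheme dissipative even in anisotropic media.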
Multi-Scale Initial Conditions For Cosmological Simulations
Hahn, Oliver; Abel, Tom; /KIPAC, Menlo Park /ZAH, Heidelberg /HITS, Heidelberg
2011-11-04
We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
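The white-noise-times-transfer-function construction is easy to demonstrate in 1-D Fourier space. This is a single-level sketch: the paper's contribution is doing the convolution adaptively in real space across refinement levels, and the power-law spectrum here is an assumption.

```python
import numpy as np

def gaussian_field(n, power_index, rng, boxsize=1.0):
    """Generate a 1-D Gaussian random field by filtering white noise with
    the square root of a power-law spectrum P(k) ~ k**power_index."""
    noise = rng.standard_normal(n)
    k = np.fft.rfftfreq(n, d=boxsize / n)
    transfer = np.zeros_like(k)
    transfer[1:] = k[1:] ** (power_index / 2.0)   # leave the k=0 mode empty
    return np.fft.irfft(np.fft.rfft(noise) * transfer, n)

rng = np.random.default_rng(7)
delta = gaussian_field(256, power_index=-1.0, rng=rng)
```

Zeroing the k = 0 mode enforces a mean-free overdensity field; in the multi-scale setting the same white noise is resampled consistently at each refinement level so that coarse and fine regions share large-scale modes.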
Assessment of Molecular Modeling & Simulation
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
Neural node network and model, and method of teaching same
Parlos, A.G.; Atiya, A.F.; Fernandez, B.; Tsai, W.K.; Chong, K.T.
1995-12-26
The present invention is a fully connected feed forward network that includes at least one hidden layer. The hidden layer includes nodes in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device occurring in the feedback path (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit from all the other nodes within the same layer. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing. 21 figs.
Neural node network and model, and method of teaching same
Parlos, Alexander G.; Atiya, Amir F.; Fernandez, Benito; Tsai, Wei K.; Chong, Kil T.
1995-01-01
The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
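The layer topology in these patent records, inputs from the previous layer plus unit-delayed local feedback and crosstalk, maps naturally onto a recurrent weight matrix whose diagonal carries the local feedback and whose off-diagonal entries carry the crosstalk. A minimal numpy sketch with illustrative sizes and random weights, not the patented training procedures:

```python
import numpy as np

rng = np.random.default_rng(3)

class RecurrentHiddenLayer:
    """Hidden layer whose nodes see the previous layer's output plus the
    unit-delayed outputs of all nodes in the same layer."""
    def __init__(self, n_in, n_nodes):
        self.w_in = 0.1 * rng.standard_normal((n_nodes, n_in))
        # Diagonal of w_rec = local feedback; off-diagonal = crosstalk.
        self.w_rec = 0.1 * rng.standard_normal((n_nodes, n_nodes))
        self.state = np.zeros(n_nodes)     # unit-delayed outputs

    def step(self, x):
        self.state = np.tanh(self.w_in @ x + self.w_rec @ self.state)
        return self.state

layer = RecurrentHiddenLayer(n_in=2, n_nodes=4)
outputs = [layer.step(np.array([1.0, 0.5])) for _ in range(10)]
```

The stored `state` implements the unit delay: each call sees the outputs of the previous step, which is what lets such a layer model dynamic nonlinear systems and time series.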
A Perspective on Coupled Multiscale Simulation and Validation in Nuclear Materials
M. P. Short; D. Gaston; C. R. Stanek; S. Yip
2014-01-01
The field of nuclear materials encompasses numerous opportunities to address and ultimately solve longstanding industrial problems by improving the fundamental understanding of materials through the integration of experiments with multiscale modeling and high-performance simulation. A particularly noteworthy example is an ongoing study of axial power distortions in a nuclear reactor induced by corrosion deposits, known as CRUD (Chalk River unidentified deposits). We describe how progress is being made toward achieving scientific advances and technological solutions on two fronts. Specifically, the study of thermal conductivity of CRUD phases has augmented missing data as well as revealed new mechanisms. Additionally, the development of a multiscale simulation framework shows potential for the validation of a new capability to predict the power distribution of a reactor, in effect direct evidence of technological impact. The material- and system-level challenges identified in the study of CRUD are similar to other well-known vexing problems in nuclear materials, such as irradiation accelerated corrosion, stress corrosion cracking, and void swelling; they all involve connecting materials science fundamentals at the atomistic- and mesoscales to technology challenges at the macroscale.
Trabanino, Rene J; Vaidehi, Nagarajan; Hall, Spencer E; Goddard, William A; Floriano, Wely
2013-02-05
The invention provides computer-implemented methods and apparatus implementing a hierarchical protocol using multiscale molecular dynamics and molecular modeling methods to predict the presence of transmembrane regions in proteins, such as G-Protein Coupled Receptors (GPCR), and protein structural models generated according to the protocol. The protocol features a coarse grain sampling method, such as hydrophobicity analysis, to provide a fast and accurate procedure for predicting transmembrane regions. Methods and apparatus of the invention are useful to screen protein or polynucleotide databases for encoded proteins with transmembrane regions, such as GPCRs.
Symmetry Methods for a Geophysical Mass Flow Model
Torrisi, Mariano; Tracina, Rita
2011-09-14
In the framework of symmetry analysis, the class of 2 x 2 PDE systems to which the Savage-Hutter model and the Iverson model belong is considered. New classes of exact solutions are found.
Piri, Mohammad
2014-03-31
Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO{sub 2} in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO{sub 2}-brine systems; (ii) improved understanding of permanent trapping mechanisms; and (iii) scientifically rigorous, fine-grid numerical simulations of CO{sub 2} storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO{sub 2} saturations, and relative permeability scanning curves (hysteresis) in rock samples from RSU; (2) characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) development of a physically based dynamic core-scale pore network model; (4) development of new, improved high-performance modules for the UW-team simulator to provide new capabilities to the existing model, including hysteresis in the relative permeability functions, geomechanical deformation, and an equilibrium calculation (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) an analysis of long-term permanent trapping of mixed scCO{sub 2} through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.
Multiscale Toxicology- Building the Next Generation Tools for Toxicology
Retterer, S. T.; Holsapple, M. P.
2013-10-31
A Cooperative Research and Development Agreement (CRADA) was established between Battelle Memorial Institute (BMI), Pacific Northwest National Laboratory (PNNL), Oak Ridge National Laboratory (ORNL), Brookhaven National Laboratory (BNL), and Lawrence Livermore National Laboratory (LLNL) with the goal of combining the analytical and synthetic strengths of the National Laboratories with BMI's expertise in basic and translational medical research to develop a collaborative pipeline and suite of high throughput and imaging technologies that could be used to provide a more comprehensive understanding of material and drug toxicology in humans. The Multi-Scale Toxicity Initiative (MSTI), consisting of the team members above, was established to coordinate cellular scale, high-throughput in vitro testing, computational modeling and whole animal in vivo toxicology studies between MSTI team members. Development of a common, well-characterized set of materials for testing was identified as a crucial need for the initiative. Two research tracks were established by BMI during the course of the CRADA. The first research track focused on the development of tools and techniques for understanding the toxicity of nanomaterials, specifically inorganic nanoparticles (NPs). ORNL's work focused primarily on the synthesis, functionalization and characterization of a common set of NPs for dissemination to the participating laboratories. These particles were synthesized to retain the same surface characteristics and size, but to allow visualization using the variety of imaging technologies present across the team. Characterization included the quantitative analysis of physical and chemical properties of the materials as well as the preliminary assessment of NP toxicity using commercially available toxicity screens and emerging optical imaging strategies.
Additional efforts examined the development of high-throughput microfluidic and imaging assays for measuring NP uptake, localization, and toxicity in vitro. The second research track within the MSTI CRADA focused on the development of ex vivo animal models for examining drug-induced cardiotoxicity. ORNL's role in the second track was limited initially, but was later expanded to include the development of microfluidic platforms that might facilitate the translation of Cardiac 'Microwire' technologies developed at the University of Toronto into a functional platform for drug screening and predictive assessment of cardiotoxicity via high-throughput measurements of contractility. This work was coordinated by BMI with the Centre for the Commercialization of Regenerative Medicine (CCRM) and the University of Toronto (U Toronto). This partnership was expanded and culminated in the submission of a proposal to Work for Others (WFO) agencies to explore the development of a broader set of microphysiological systems, a so-called human-on-a-chip, that could be used for toxicity screening and the evaluation of bio-threat countermeasures.
Lifetime statistics of quantum chaos studied by a multiscale analysis
Di Falco, A.; Krauss, T. F. [School of Physics and Astronomy, University of St. Andrews, North Haugh, St. Andrews, KY16 9SS (United Kingdom); Fratalocchi, A. [PRIMALIGHT, Faculty of Electrical Engineering, Applied Mathematics and Computational Science, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900 (Saudi Arabia)
2012-04-30
In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.
MREG V1.1 : a multi-scale image registration algorithm for SAR applications.
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application-specific recommendations for CCD, Two-Color MultiView, and SAR stereoscopy.
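The coarse-to-fine idea behind such an iterative multi-scale paradigm — estimate a large offset cheaply at a downsampled scale, then refine by a sample or two per level — can be illustrated with a one-dimensional toy. This is a generic sketch of the paradigm, not the MREG algorithm itself:

```python
import math

def best_shift(a, b, candidates):
    """Candidate shift s minimizing mean squared error between a[i] and b[i+s]."""
    best, best_err = candidates[0], float("inf")
    for s in candidates:
        pairs = [(a[i], b[i + s]) for i in range(len(a)) if 0 <= i + s < len(b)]
        err = sum((p - q) ** 2 for p, q in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def downsample(sig):
    """Halve the resolution by averaging adjacent samples."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def register(a, b, levels):
    """Coarse-to-fine shift estimation: a wide search only at the
    coarsest level, then a +/-1-sample refinement at each finer level."""
    if levels == 0:
        return best_shift(a, b, list(range(-4, 5)))
    coarse = 2 * register(downsample(a), downsample(b), levels - 1)
    return best_shift(a, b, [coarse - 1, coarse, coarse + 1])

# A Gaussian bump shifted right by 3 samples is recovered exactly.
a = [math.exp(-((i - 20) ** 2) / 18.0) for i in range(64)]
b = [a[i - 3] if i >= 3 else 0.0 for i in range(64)]
shift = register(a, b, 2)
```

The wide search is cheap because it runs on the shortest signal, which is how the two competing goals in the abstract — sub-pixel-capable refinement and tolerance of large initial offsets — can coexist.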
Yortsos, Y.C.
2001-05-29
This report is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.
Yortsos, Yanis C.
2001-08-07
This project is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.
MULTI-SCALE MORPHOLOGICAL ANALYSIS OF SDSS DR5 SURVEY USING THE METRIC SPACE TECHNIQUE
Wu Yongfeng; Batuski, David J.; Khalil, Andre
2009-12-20
Following the novel development and adaptation of the Metric Space Technique (MST), a multi-scale morphological analysis of the Sloan Digital Sky Survey (SDSS) Data Release 5 (DR5) was performed. The technique was adapted to perform a space-scale morphological analysis by filtering the galaxy point distributions with a smoothing Gaussian function, thus giving quantitative structural information on all size scales between 5 and 250 Mpc. The analysis was performed on a dozen slices of a volume of space containing many newly measured galaxies from the SDSS DR5 survey. Using the MST, observational data were compared to galaxy samples taken from N-body simulations with current best estimates of cosmological parameters and from random catalogs. By using the maximal ranking method among MST output functions, we also develop a way to quantify the overall similarity of the observed samples with the simulated samples.
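The scale sweep described above — filtering a point distribution with Gaussians of increasing width to probe structure at each size scale — can be sketched in one dimension. The bin counts and scale values below are invented for illustration and are unrelated to the SDSS data:

```python
import math

def gaussian_smooth(counts, sigma):
    """Smooth a binned point distribution with a Gaussian kernel of
    width sigma (in bins); sigma plays the role of the size scale."""
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2)
              for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    out = []
    for i in range(len(counts)):
        v = sum(kernel[k + radius] * counts[i + k]
                for k in range(-radius, radius + 1)
                if 0 <= i + k < len(counts))
        out.append(v)
    return out

# Toy "galaxy counts": two sharp clusters, examined at three scales.
counts = [0.0] * 100
counts[20] = counts[70] = 1.0
fields = {sigma: gaussian_smooth(counts, sigma) for sigma in (1, 4, 16)}
```

At small sigma the field retains sharp peaks at the clusters; at large sigma the peaks spread out and flatten, which is the raw material a morphological statistic like the MST output functions would then quantify scale by scale.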
Method of modeling transmissions for real-time simulation
Hebbale, Kumaraswamy V.
2012-09-25
A transmission modeling system includes an in-gear module that determines an in-gear acceleration when a vehicle is in gear. A shift module determines a shift acceleration based on a clutch torque when the vehicle is shifting between gears. A shaft acceleration determination module determines a shaft acceleration based on at least one of the in-gear acceleration and the shift acceleration.
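The module structure in the abstract — an in-gear path, a shift path driven by clutch torque, and a selector between them — can be sketched as follows. The simple torque-to-acceleration physics and all parameter names are illustrative assumptions, not the patented model:

```python
def in_gear_acceleration(engine_torque, gear_ratio, inertia):
    """Acceleration with the driveline locked: engine torque is
    multiplied by the gear ratio and divided by the reflected inertia."""
    return engine_torque * gear_ratio / inertia

def shift_acceleration(clutch_torque, inertia):
    """During a shift, the oncoming clutch carries the torque."""
    return clutch_torque / inertia

def shaft_acceleration(shifting, engine_torque, clutch_torque,
                       gear_ratio, inertia):
    """Select the acceleration source based on transmission state,
    mirroring the in-gear / shift / shaft module split."""
    if shifting:
        return shift_acceleration(clutch_torque, inertia)
    return in_gear_acceleration(engine_torque, gear_ratio, inertia)
```

Keeping the two paths in separate modules lets a real-time simulator swap the cheap in-gear model for the clutch-torque model only during the brief shift window.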
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least a variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
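The branch-pruning behavior described above can be sketched with a small recursive solver. The node representation, the update functions, and the tracked-variable tolerances are all hypothetical, not taken from the patent:

```python
def solve(node, state, tolerances, solutions, path=()):
    """Depth-first evaluation of a possibility tree. Each node is
    (name, update, children), where update maps a state dict to a new
    state dict. When a tracked variable leaves its tolerance band, the
    rest of the branch is disabled and the branch is marked invalid."""
    name, update, children = node
    state = update(dict(state))
    path = path + (name,)
    for var, (lo, hi) in tolerances.items():
        if not lo <= state[var] <= hi:
            return  # prune: branch invalid, remaining nodes skipped
    if not children:
        solutions.append(path)  # leaf reached with all variables in tolerance
        return
    for child in children:
        solve(child, state, tolerances, solutions, path)

# Toy tree: track variable "x", keep branches with 0 <= x <= 10.
bump = lambda d: dict(d, x=d["x"] + 4)
drop = lambda d: dict(d, x=d["x"] - 1)
tree = ("root", lambda d: d,
        [("a", bump, [("a1", bump, []), ("a2", drop, [])]),
         ("b", bump, [("b1", bump, [("b1x", bump, [])])])])
valid = []
solve(tree, {"x": 3}, {"x": (0, 10)}, valid)
```

Starting from x = 3, branch root-a-a1 and the whole b subtree push x past 10 and are pruned; only root-a-a2 survives as a valid tree solution.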
Radiation Damage in Nuclear Fuel for Advanced Burner Reactors: Modeling and Experimental Validation
Jensen, Niels Gronbech; Asta, Mark; Ozolins, Vidvuds; Browning, Nigel; van de Walle, Axel; Wolverton, Christopher
2011-12-29
The consortium has completed its work, and we highlight here its accomplishments. As outlined in the proposal, the objective of the work was to advance the theoretical understanding of advanced nuclear fuel materials (oxides) toward a comprehensive modeling strategy that incorporates the different relevant scales involved in radiation damage in oxide fuels. To this end, we set out to investigate and develop a set of directions: 1) fission fragment and ion trajectory studies through advanced molecular dynamics methods that allow for statistical multi-scale simulations, including an investigation of interatomic force fields appropriate for the energetic multi-scale phenomena of high-energy collisions; 2) studies of defect and gas bubble formation through electronic structure and Monte Carlo simulations; and 3) an experimental component for the characterization of materials such that comparisons can be made between theory and experiment.
Review of Wind Energy Forecasting Methods for Modeling Ramping Events
Wharton, S; Lundquist, J K; Marjanovic, N; Williams, J L; Rhodes, M; Chow, T K; Maxwell, R
2011-03-28
Tall onshore wind turbines, with hub heights between 80 m and 100 m, can extract large amounts of energy from the atmosphere since they generally encounter higher wind speeds, but they face challenges given the complexity of boundary layer flows. This complexity of the lowest layers of the atmosphere, where wind turbines reside, has made conventional modeling efforts less than ideal. To meet the nation's goal of increasing wind power penetration in the U.S. electrical grid, the accuracy of wind power forecasts must be improved. In this report, the Lawrence Livermore National Laboratory, in collaboration with the University of Colorado at Boulder, University of California at Berkeley, and Colorado School of Mines, evaluates innovative approaches to forecasting sudden changes in wind speed or 'ramping events' at an onshore, multimegawatt wind farm. The forecast simulations are compared to observations of wind speed and direction from tall meteorological towers and a remote-sensing Sound Detection and Ranging (SODAR) instrument. Ramping events, i.e., sudden increases or decreases in wind speed and hence, power generated by a turbine, are especially problematic for wind farm operators. Sudden changes in wind speed or direction can lead to large power generation differences across a wind farm and are very difficult to predict with current forecasting tools. Here, we quantify the ability of three models, mesoscale WRF, WRF-LES, and PF.WRF, which vary in sophistication and required user expertise, to predict three ramping events at a North American wind farm.
Robertson, Eric P; Christiansen, Richard L.
2007-10-23
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
Robertson, Eric P; Christiansen, Richard L.
2007-05-29
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
A multilingual programming model for coupled systems.
Ong, E. T.; Larson, J. W.; Norris, B.; Tobis, M.; Steder, M.; Jacob, R. L.; Mathematics and Computer Science; Univ. of Wisconsin; Univ. of Chicago; The Australian National Univ.
2008-01-01
Multiphysics and multiscale simulation systems share a common software requirement-infrastructure to implement data exchanges between their constituent parts-often called the coupling problem. On distributed-memory parallel platforms, the coupling problem is complicated by the need to describe, transfer, and transform distributed data, known as the parallel coupling problem. Parallel coupling is emerging as a new grand challenge in computational science as scientists attempt to build multiscale and multiphysics systems on parallel platforms. An additional coupling problem in these systems is language interoperability between their constituent codes. We have created a multilingual parallel coupling programming model based on a successful open-source parallel coupling library, the Model Coupling Toolkit (MCT). This programming model's capabilities reach beyond MCT's native Fortran implementation to include bindings for the C++ and Python programming languages. We describe the method used to generate the interlanguage bindings. This approach enables an object-based programming model for implementing parallel couplings in non-Fortran coupled systems and in systems with language heterogeneity. We describe the C++ and Python versions of the MCT programming model and provide short examples. We report preliminary performance results for the MCT interpolation benchmark. We describe a major Python application that uses the MCT Python bindings, a Python implementation of the control and coupling infrastructure for the community climate system model. We conclude with a discussion of the significance of this work to productivity computing in multidisciplinary computational science.
Three-Dimensional Lithium-Ion Battery Model (Presentation)
Kim, G. H.; Smith, K.
2008-05-01
Nonuniform battery physics can cause unexpected performance and life degradations in lithium-ion batteries; a three-dimensional cell performance model was developed by integrating an electrode-scale submodel using a multiscale modeling scheme.
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has yet emerged. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating point absorber.
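As a minimal instance of the analytical end of that modeling spectrum, a floating point absorber is often idealized as a damped heave oscillator with the power take-off (PTO) modeled as a linear damper. The sketch below integrates that single-degree-of-freedom model with invented parameters; it is a textbook illustration, not any specific method from the report:

```python
import math

def simulate_heave(mass, stiffness, damping_pto, wave_force_amp,
                   wave_freq, dt=0.01, steps=20000):
    """Heave model of a point absorber: m*z'' = -k*z - c*z' + F*cos(w*t),
    integrated with semi-implicit Euler. Absorbed power is the
    time-averaged PTO dissipation c*z'^2."""
    z, v = 0.0, 0.0
    power_sum = 0.0
    for n in range(steps):
        t = n * dt
        force = wave_force_amp * math.cos(wave_freq * t)
        a = (force - stiffness * z - damping_pto * v) / mass
        v += a * dt
        z += v * dt
        power_sum += damping_pto * v * v * dt
    return power_sum / (steps * dt)  # mean absorbed power, W

# Absorbed power peaks when the wave frequency matches the buoy's
# natural frequency sqrt(k/m); here sqrt(4000/1000) = 2 rad/s.
m, k, c, F = 1000.0, 4000.0, 400.0, 500.0
p_resonant = simulate_heave(m, k, c, F, wave_freq=2.0)
p_off = simulate_heave(m, k, c, F, wave_freq=4.0)
```

More elaborate methods in the report's taxonomy (boundary-integral, Navier-Stokes) replace the constant coefficients here with frequency-dependent added mass, radiation damping, and nonlinear hydrodynamics.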
Sereda, Yuriy V.; Ortoleva, Peter J.
2014-04-07
A closed kinetic equation for the single-particle density of a viscous simple liquid is derived using a variational method for the Liouville equation and a coarse-grained mean-field (CGMF) ansatz. The CGMF ansatz is based on the notion that during the characteristic time of deformation a given particle interacts with many others so that it experiences an average interaction. A trial function for the N-particle probability density is constructed using a multiscale perturbation method and the CGMF ansatz is applied to it. The multiscale perturbation scheme is based on the ratio of the average nearest-neighbor atom distance to the total size of the assembly. A constraint on the initial condition is discovered which guarantees that the kinetic equation is mass-conserving and closed in the single-particle density. The kinetic equation has much of the character of the Vlasov equation except that true viscous, and not Landau, damping is accounted for. The theory captures condensation kinetics and takes much of the character of the Gross-Pitaevskii equation in the weak-gradient short-range force limit.
Proposed SPAR Modeling Method for Quantifying Time Dependent Station Blackout Cut Sets
John A. Schroeder
2010-06-01
The U.S. Nuclear Regulatory Commission's (USNRC's) Standardized Plant Analysis Risk (SPAR) models and industry risk models take similar approaches to analyzing the risk associated with loss of offsite power and station blackout (LOOP/SBO) events at nuclear reactor plants. In both SPAR models and industry models, core damage risk resulting from a LOOP/SBO event is analyzed using a combination of event trees and fault trees that produce cut sets that are, in turn, quantified to obtain a numerical estimate of the resulting core damage risk. A proposed SPAR method for quantifying the time-dependent cut sets is sometimes referred to as a convolution method. The SPAR method reflects assumptions about the timing of emergency diesel failures, the timing of subsequent attempts at emergency diesel repair, and the timing of core damage that may be different from those often used in industry models. This paper describes the proposed SPAR method.
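The convolution idea — integrating over when the diesel fails and asking whether repair beats battery depletion — can be sketched with exponential timing distributions. All rates and times below are invented, and the model is far simpler than the SPAR treatment the abstract describes:

```python
import math

def sbo_core_damage_prob(fail_rate, repair_rate, coping_hrs, mission_hrs,
                         dt=0.1):
    """Convolution-style sketch of time-dependent SBO quantification:
    sum over diesel failure times t of
        P(fail in [t, t+dt)) * P(no repair before batteries deplete),
    where the repair window is the battery coping time, truncated at
    the end of the mission."""
    cd = 0.0
    for n in range(int(mission_hrs / dt)):
        t = n * dt
        # probability the diesel fails in [t, t+dt), exponential timing
        p_fail = math.exp(-fail_rate * t) - math.exp(-fail_rate * (t + dt))
        # probability repair is NOT completed within the coping window
        window = min(coping_hrs, mission_hrs - t)
        p_no_repair = math.exp(-repair_rate * window)
        cd += p_fail * p_no_repair
    return cd

# A longer battery coping time lowers the core damage probability.
p_4hr = sbo_core_damage_prob(0.05, 0.5, coping_hrs=4, mission_hrs=24)
p_8hr = sbo_core_damage_prob(0.05, 0.5, coping_hrs=8, mission_hrs=24)
```

The point of the time-dependent treatment is visible even in this toy: the quantified cut-set value depends on the interplay of failure, repair, and coping timing, not just on static failure probabilities.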
New methods for identifying value added by a regional climate model
Argonne National Laboratory | By Brian Grabowski | January 26, 2015
Regional climate models (RCMs) are a standard tool for downscaling climate forecasts to finer spatial scales. The evaluation of RCMs against observational data is an important step in building confidence in the use of RCMs for future prediction.
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.
2010-07-10
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework to bridge gaps in the user's knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, the level of documentation associated with the components and the framework, and the design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each approach has its strengths and weaknesses, and identifies the factors that determine which approaches work best in a given application.
Systematic Method for the Kinetic Modeling of Temporally Resolved Hyperspectral Microscope Images of Fluorescently Labeled Cells
Cutler, Patrick J.; Haaland, David M.; Gemperline, Paul J.
A Comparison of Multiscale Variations of Decade-long Cloud Fractions from Six Different Platforms over the Southern Great Plains in the United States
EFFECTS OF PORE STRUCTURE CHANGE AND MULTI-SCALE HETEROGENEITY ON CONTAMINANT TRANSPORT AND REACTION RATE UPSCALING
This project addressed the scaling of geochemical reactions to core and field scales.
COLLOQUIUM: The Magnetospheric MultiScale Mission Investigation of Magnetic Reconnection
Professor Roy Torbert, University of New Hampshire
Princeton Plasma Physics Laboratory, February 21, 2013
In late fall 2014, NASA will launch the Magnetospheric Multiscale (MMS) mission to study the kinetic physics of magnetic reconnection.
Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts
Gamble, K. A.; Hales, J. D.; Yu, J.; Zhang, Y.; Bai, X.; Andersson, D.; Patra, A.; Wen, W.; Tome, C.; Baskes, M.; Martinez, E.; Stanek, C. R.; Miao, Y.; Ye, B.; Hofman, G. L.; Yacout, A. M.; Liu, W.
2015-09-01
U_{3}Si_{2} and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy’s Accident Tolerant Fuel High Impact Problem program significant work has been conducted to investigate the U_{3}Si_{2} and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories including Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.
Multi-scale evaporator architectures for geothermal binary power plants
Sabau, Adrian S; Nejad, Ali; Klett, James William; Bejan, Adrian
2016-01-01
In this paper, novel geometries of heat exchanger architectures are proposed for evaporators that are used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both macro-scale and micro-scale, which are designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several factors were considered in order to fully assess the performance of the new heat exchangers, such as weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; Shu, Chi-Wang
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Three-dimensional Dendritic Needle Network model with application...
Office of Scientific and Technical Information (OSTI)
We present a three-dimensional (3D) extension of a previously proposed multi-scale ... of a given thickness, one can directly extend the DNN approach to 3D modeling. ...
Gardner, Shea Nicole
2007-10-23
A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment options based on the efficacy value.
A method for the quantification of model form error associated with physical systems.
Wallen, Samuel P.; Brake, Matthew Robert
2014-03-01
In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.
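The model-to-model comparison described above can be sketched numerically: score each candidate model against the same data set with a scalar discrepancy measure. The RMS residual below is only a stand-in metric, and the models and data are invented for illustration; the report's actual model form error analysis is more involved.

```python
# Illustrative sketch (not the report's method): rank two candidate models
# against shared "experimental" data by root-mean-square residual.
import numpy as np

def rms_residual(model, x, y):
    """Root-mean-square difference between model predictions and data."""
    return float(np.sqrt(np.mean((model(x) - y) ** 2)))

# Synthetic "experimental" data with a mild nonlinearity.
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 0.3 * x ** 2

linear_model = lambda x: 1.0 + 2.3 * x                     # plausible linear fit
quadratic_model = lambda x: 1.0 + 2.0 * x + 0.3 * x ** 2   # matches the true form

err_linear = rms_residual(linear_model, x, y)
err_quadratic = rms_residual(quadratic_model, x, y)
```

A smaller residual alone does not settle model form error; the point of the report is precisely that two models with similar residuals may extrapolate very differently.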
A review of existing models and methods to estimate employment effects of pollution control policies
Darwin, R.F.; Nesse, R.J.
1988-02-01
The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.
Computational Modeling | Bioenergy | NREL
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass, illustrated by a snapshot of a coarse-grain model of a complex cellulosome.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, the Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. The Multivariate Adaptive Regression Splines (MARS), Delta Test (DT), and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA), and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly.
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
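The MOAT screening mentioned above can be sketched as elementary effects averaged over random one-at-a-time perturbations; the toy model and unit parameter ranges below are purely illustrative, not SAC-SMA or the PSUADE implementation.

```python
# Minimal Morris One-At-a-Time (MOAT) sketch: rank parameters by the mean
# absolute elementary effect of a one-at-a-time perturbation.
import numpy as np

rng = np.random.default_rng(0)

def model(p):
    # Toy model: strongly sensitive to p[0], weakly to p[1], not at all to p[2].
    return 10.0 * p[0] + 0.1 * p[1] + 0.0 * p[2]

def morris_elementary_effects(model, n_params, n_trajectories=50, delta=0.1):
    """Mean absolute elementary effect per parameter (a MOAT sensitivity proxy)."""
    effects = np.zeros(n_params)
    for _ in range(n_trajectories):
        base = rng.uniform(0.0, 1.0 - delta, n_params)
        y0 = model(base)
        for i in range(n_params):
            perturbed = base.copy()
            perturbed[i] += delta            # one-at-a-time step
            effects[i] += abs(model(perturbed) - y0) / delta
    return effects / n_trajectories

mu_star = morris_elementary_effects(model, 3)
```

The sample budget here (number of trajectories times parameters) is what the 280-sample figure quoted above counts for the thirteen-parameter SAC-SMA case.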
Multi-Scale Investigation of Sheared Flows In Magnetized Plasmas
Edward, Jr., Thomas
2014-09-19
Flows parallel and perpendicular to magnetic fields in a plasma are important phenomena in many areas of plasma science research. The presence of these spatially inhomogeneous flows is often associated with the stability of the plasma. In fusion plasmas, these sheared flows can be stabilizing while in space plasmas, these sheared flows can be destabilizing. Because of this, there is broad interest in understanding the coupling between plasma stability and plasma flows. This research project has engaged in a study of the plasma response to spatially inhomogeneous plasma flows using three different experimental devices: the Auburn Linear Experiment for Instability Studies (ALEXIS) and the Compact Toroidal Hybrid (CTH) stellarator devices at Auburn University, and the Space Plasma Simulation Chamber (SPSC) at the Naval Research Laboratory. This work has shown that there is a commonality of the plasma response to sheared flows across a wide range of plasma parameters and magnetic field geometries. The goal of this multi-device, multi-scale project is to understand how sheared flows established by the same underlying physical mechanisms lead to different plasma responses in fusion, laboratory, and space plasmas.
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
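The error signal described above, a weighted sum of the position error and its time derivative for each link, can be sketched as follows; the weights stand in for the constant coefficients of the dynamic model, and all the numbers are invented for illustration.

```python
# Sketch of the per-link error signal: weighted sum of position and velocity
# tracking errors. In the patented method the weights come from the
# identified dynamic-model coefficients; here they are placeholders.
def link_error(x_meas, x_ref, v_meas, v_ref, w_p, w_v):
    """Weighted sum of position and velocity tracking errors for one link."""
    return w_p * (x_meas - x_ref) + w_v * (v_meas - v_ref)

# Example: 0.02 rad position error and 0.1 rad/s velocity error.
e = link_error(x_meas=1.02, x_ref=1.00, v_meas=0.5, v_ref=0.4, w_p=10.0, w_v=2.0)
```

In the closed negative feedback loop, this scalar would be driven toward zero for each link.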
Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly, and complex climate models.
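The core idea, trading one long serial integration for many short independent realizations of equal total length, can be sketched with a toy stochastic process; the process below is purely illustrative and has nothing to do with CAM5 itself.

```python
# Sketch of the ensemble idea: estimate a "fast-process" statistic either from
# one long serial run or from many short, independent (parallelizable) runs.
import numpy as np

rng = np.random.default_rng(1)
true_mean = 3.0

# One long run: 10,000 sequential "days".
long_run = true_mean + rng.standard_normal(10_000)
long_estimate = long_run.mean()

# Ensemble: 100 independent short runs of 100 "days" each.
short_runs = true_mean + rng.standard_normal((100, 100))
ensemble_estimate = short_runs.mean()
```

For genuinely fast, weakly correlated processes, the two estimates carry comparable information; the ensemble simply arrives sooner because its members run in parallel.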
In silico method for modelling metabolism and gene product expression at genome scale
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem; Portnoy, Vasiliy A.; Lewis, Nathan E.; Orth, Jeffrey D.; Rutledge, Alexandra C.; Smith, Richard D.; Adkins, Joshua N.; Zengler, Karsten; Palsson, Bernard O.
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and the improvement of genome and transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
Boscá, A.; Pedrós, J.; Martínez, J.; Calle, F.
2015-01-28
Due to its intrinsically high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method allows one to calculate parameters such as the mobility, the contact resistance, and the fixed charge. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method's output values can be correlated with several issues during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metallization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
High-order continuum kinetic method for modeling plasma dynamics in phase space
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x,vx,vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r,z,vr,vz) phase space are presented.
Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.
2014-09-04
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal basis of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves better reduction ratios and smaller response errors than traditional coherency aggregation methods.
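The feature-extraction and reconstruction steps can be sketched with an SVD of a synthetic response matrix: when the measured generator dynamics are dominated by a few modes, a low-rank combination reproduces every trajectory. The generator responses below are invented for illustration and are not from any IEEE test system.

```python
# Sketch of SVD-based feature extraction and low-rank reconstruction of
# measured generator dynamics (synthetic data, rows = generators).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)

# 10 generator responses built from 2 underlying dynamic modes.
modes = np.vstack([np.sin(2 * np.pi * t), np.cos(6 * np.pi * t)])
mixing = rng.standard_normal((10, 2))
responses = mixing @ modes

# Feature extraction: SVD of the measured response matrix.
U, s, Vt = np.linalg.svd(responses, full_matrices=False)

# Reconstruction from the 2 dominant features (rank-2 approximation).
rank = 2
approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
recon_error = float(np.linalg.norm(responses - approx) / np.linalg.norm(responses))
```

In DEAR proper, the basis is formed from actual characteristic generators rather than abstract singular vectors, but the low-rank principle is the same.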
Multiscale twin hierarchy in NiMnGa shape memory alloys with Fe and Cu
Office of Scientific and Technical Information (OSTI)
Barabash, Rozaliya I.; Barabash, Oleg M.; Popov, Dmitry; Shen, Guoyin; Park, Changyong; Yang, Wenge
2015-04-01
"Multiscale Capabilities for Exploring Transport Phenomena in Batteries":
Office of Scientific and Technical Information (OSTI)
Ab Initio Calculations on Defective LiFePO4
Kanai, Y.; Tang, M.; Wood, B. C.
2013-10-25
Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration
Daniels, Jeff
2012-11-30
Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO2 being released into the atmosphere daily. Test sites for CO2 injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full scale implementation is monitoring and verifying the permanence of injected CO2. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular, but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO2 at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic methods) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the model that mimic a potential CO2 injection site, numerically forward modeling a geophysical survey, and analyzing the results. A site located in Warren County, Ohio was selected as the test site for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and cap rock, the Eau Claire Formation.
Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculating electrical permittivity values, seismic data collection, and seismic interpretation. The data were input into GphyzCO2 to demonstrate a full implementation of the software capabilities. Part of the implementation investigated the limits of using geophysical methods to monitor CO2 injection sites. The results show that cross-hole EM numerical surveys are limited to under 100 meter borehole separation. Those results were utilized in executing numerical EM surveys that contain hypothetical CO2 injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO2.
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
Yunovich, M.; Thompson, N.G.
1998-12-31
During the past fifteen years corrosion inhibiting admixtures (CIAs) have become increasingly popular for protection of reinforced components of highway bridges and other structures from damage induced by chlorides. However, there remains considerable debate about the benefits of CIAs in concrete. A variety of testing methods to assess the performance of CIA have been reported in the literature, ranging from tests in simulated pore solutions to long-term exposures of concrete slabs. The paper reviews the published techniques and recommends the methods which would make up a comprehensive CIA effectiveness testing program. The results of this set of tests would provide the data which can be used to rank the presently commercially available CIA and future candidate formulations utilizing a proposed predictive model. The model is based on relatively short-term laboratory testing and considers several phases of a service life of a structure (corrosion initiation, corrosion propagation without damage, and damage to the structure).
An efficient modeling method for thermal stratification simulation in a BWR suppression pool
Haihua Zhao; Ling Zou; Hongbin Zhang; Hua Li; Walter Villanueva; Pavel Kudinov
2012-09-01
The suppression pool in a BWR plant not only is the major heat sink within the containment system, but also provides major emergency cooling water for the reactor core. In several accident scenarios, such as LOCA and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (Available Net Positive Suction Head) and therefore the performance of the pump which draws cooling water back to the core. Current safety analysis codes use 0-D lumped parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in the prediction of scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to the Nordic-design BWR suppression pool, including thermal stratification and mixing, are used for validation. GOTHIC lumped parameter models are used to obtain boundary conditions for the BMIX++ code and CFD simulations. Comparisons of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data are discussed in detail.
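The 1-D ambient-fluid representation behind BMIX++ can be illustrated with a minimal explicit finite-difference sketch of transient vertical diffusion; the grid, diffusivity, and boundary conditions below are assumptions for illustration, not the code's actual pool models.

```python
# Minimal 1-D transient diffusion sketch (explicit finite differences) of the
# kind of ambient-fluid equation a 1-D stratification model solves.
import numpy as np

n, dz = 50, 0.1                # vertical cells and spacing (m)
alpha, dt = 1.0e-3, 1.0        # thermal diffusivity (m^2/s), time step (s)
T = np.full(n, 300.0)          # initial uniform pool temperature (K)
T[-1] = 340.0                  # hot layer held at the top of the pool

for _ in range(500):           # march in time; stable since alpha*dt/dz**2 < 0.5
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
```

A solver of this size runs in milliseconds, which is the point of the 1-D approach: the expensive 3-D physics (jets, wall layers) is delegated to separate integral models rather than resolved on the grid.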
Mathematical and computational modeling of the diffraction problems by discrete singularities method
Nesvit, K. V.
2014-11-12
The main objective of this study is to reduce the boundary-value problems of wave scattering and diffraction on plane-parallel structures to singular or hypersingular integral equations. For these cases we use a method of parametric representations of the integral and pseudo-differential operators. Numerical results of the model scattering problems on periodic and boundary gratings, and also on gratings above a flat screen reflector, are presented in this paper.
Shi, Xing; Lin, Guang; Zou, Jianfeng; Fedosov, Dmitry A.
2013-07-20
To model red blood cell (RBC) deformation in flow, the recently developed LBM-DLM/FD method (Shi and Lim, 2007), derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method, is extended to employ a mesoscopic network model for simulations of red blood cell deformation. The flow is simulated by the lattice Boltzmann method with an external force, the network model is used for modeling red blood cell deformation, and the fluid-RBC interaction is enforced by the Lagrange multiplier. Stretching tests on both coarse and fine meshes are performed and compared with the corresponding experimental data to validate the parameters of the RBC network model. In addition, RBC deformation in pipe flow and in shear flow is simulated, demonstrating the capacity of the current method for modeling RBC deformation in various flows.
Crystal Plasticity Model of Reactor Pressure Vessel Embrittlement in GRIZZLY
Chakraborty, Pritam; Biner, Suleyman Bulent; Zhang, Yongfeng; Spencer, Benjamin Whiting
2015-07-01
The integrity of reactor pressure vessels (RPVs) is of utmost importance to ensure safe operation of nuclear reactors under extended lifetime. Microstructure-scale models at various length and time scales, coupled concurrently or through homogenization methods, can play a crucial role in understanding and quantifying irradiation-induced defect production, growth and their influence on mechanical behavior of RPV steels. A multi-scale approach, involving atomistic, meso- and engineering-scale models, is currently being pursued within the GRIZZLY project to understand and quantify irradiation-induced embrittlement of RPV steels. Within this framework, a dislocation-density based crystal plasticity model has been developed in GRIZZLY that captures the effect of irradiation-induced defects on the flow stress behavior and is presented in this report. The present formulation accounts for the interaction between self-interstitial loops and matrix dislocations. The model predictions have been validated with experiments and dislocation dynamics simulation.
A novel method for modeling the recoil in W boson events at hadron colliders
Abazov, Victor Mukhamedovich; Abbott, Braden Keim; Abolins, Maris A.; Acharya, Bannanje Sripath; Adams, Mark Raymond; Adams, Todd; Aguilo, Ernest; Ahsan, Mahsana; Alexeev, Guennadi D.; Alkhazov, Georgiy D.; Alton, Andrew K.; /Michigan U. /Augustana Coll., Sioux Falls /Northeastern U.
2009-07-01
We present a new method for modeling the hadronic recoil in W → ℓν events produced at hadron colliders. The recoil is chosen from a library of recoils in Z → ℓℓ data events and overlaid on a simulated W → ℓν event. Implementation of this method requires that the data recoil library describe the properties of the measured recoil as a function of the true, rather than the measured, transverse momentum of the boson. We address this issue using a multidimensional Bayesian unfolding technique. We estimate the statistical and systematic uncertainties from this method for the W boson mass and width measurements assuming 1 fb⁻¹ of data from the Fermilab Tevatron. The uncertainties are found to be small and comparable to those of a more traditional parameterized recoil model. For the high precision measurements that will be possible with data from Run II of the Fermilab Tevatron and from the CERN LHC, the method presented in this paper may be advantageous, since it does not require an understanding of the measured recoil from first principles.
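The library-overlay step can be sketched as a lookup keyed by the true boson transverse momentum; the bins, recoil values, and two-component recoil vectors below are invented for illustration and bear no relation to the actual D0 library.

```python
# Sketch of the recoil-library idea: pick a measured recoil from Z-event bins
# keyed by the true boson pT, to be overlaid on a simulated W event.
import random

random.seed(3)

# Recoil library: true-pT bin (GeV) -> measured recoil vectors from Z data.
recoil_library = {
    (0, 10): [(-1.2, 0.4), (-0.8, -0.3)],
    (10, 20): [(-5.1, 1.7), (-4.3, -2.2)],
}

def sample_recoil(true_pt):
    """Draw a recoil from the library bin containing the true boson pT."""
    for (lo, hi), recoils in recoil_library.items():
        if lo <= true_pt < hi:
            return random.choice(recoils)
    raise ValueError("true pT outside library range")

recoil = sample_recoil(true_pt=12.5)
```

The Bayesian unfolding mentioned in the abstract is what justifies keying the library on *true* rather than *measured* pT, since only the latter is directly observable in Z data.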
Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method
Haihua Zhao; Ling Zou; Hongbin Zhang
2014-01-01
The suppression pool in a boiling water reactor (BWR) plant not only is the major heat sink within the containment system, but also provides the major emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero dimensional (0-D) lumped parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in a long simulation time. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification where the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. 
One heat-up experiment performed at the POOLEX facility in Finland, which was designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, is used for validation. Comparisons between the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data are discussed in detail.
Plys, Martin; Burelbach, James; Lee, Sung Jin; Apthorpe, Robert
2013-07-01
A unified modeling method applicable to the processing, shipping, and storage of spent nuclear fuel and sludge has been incrementally developed, validated, and applied over a period of about 15 years at the US DOE Hanford site. The software, FATE™, provides a consistent framework for a wide dynamic range of common DOE and commercial fuel and waste applications. It has been used during the design phase, for safety and licensing calculations, and offers a graded approach to complex modeling problems encountered at DOE facilities and abroad (e.g., Sellafield). FATE has also been used for commercial power plant evaluations including reactor building fire modeling for fire PRA, evaluation of hydrogen release, transport, and flammability for post-Fukushima vulnerability assessment, and drying of commercial oxide fuel. FATE comprises an integrated set of models for fluid flow, aerosol and contamination release, transport, and deposition, thermal response including chemical reactions, and evaluation of fire and explosion hazards. It is one of few software tools that combine both source term and thermal-hydraulic capability. Practical examples are described below, with consideration of appropriate model complexity and validation. (authors)
Zhen, Yi; Zhang, Xinyuan; Wang, Ningli; Gu, Suicheng; Meng, Xin; Zheng, Bin; Pu, Jiantao (E-mail: puj@upmc.edu)
2014-09-15
Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy handles the variability of retinal vessels in width, intensity, resolution, and orientation; the progressive fusion combines consecutive contrast images while avoiding an abrupt fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.
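The agreement measures named above (sensitivity, specificity, accuracy) are standard pixel-classification quantities. This is a generic sketch of their computation from flattened binary vessel masks, not the authors' evaluation code:

```python
def agreement(auto_mask, manual_mask):
    """Compare an automatic vessel mask against a manual delineation (both 0/1)."""
    tp = fp = tn = fn = 0
    for a, m in zip(auto_mask, manual_mask):
        if a and m:
            tp += 1          # vessel pixel found by both
        elif a and not m:
            fp += 1          # spurious vessel pixel
        elif not a and m:
            fn += 1          # missed vessel pixel
        else:
            tn += 1          # background agreed by both
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy 5-pixel example
sens, spec, acc = agreement([1, 1, 0, 1, 0], [1, 0, 0, 1, 0])
```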
Model-based approach to UXO imaging using the time domain electromagnetic method
Lavely, E.M.
1999-04-01
Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model-based imaging capability, i.e., the forward and inverse problems, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
MacAlpine, Sara; Deline, Chris
2015-09-15
It is often difficult to model the effects of partial shading conditions on PV array performance, as shade losses are nonlinear and depend heavily on a system's particular configuration. This work describes and implements a simple method for modeling shade loss: a database of shade impact results (loss percentages), generated using a validated, detailed simulation tool and encompassing a wide variety of shading scenarios. The database is intended to predict shading losses in crystalline silicon PV arrays and is accessed using basic inputs generally available in any PV simulation tool. Performance predictions using the database are within 1-2% of measured data for several partially shaded PV systems, and within 1% of those predicted by the full, detailed simulation tool on an annual basis. The shade loss database shows potential to considerably improve performance prediction for partially shaded PV systems.
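The database idea above can be sketched as a lookup table with interpolation. The sketch below is a deliberate simplification: it uses a single input (shaded fraction of the array), and the breakpoints and loss percentages are invented for illustration, standing in for the multi-input database built from the detailed simulation tool.

```python
from bisect import bisect_left

# Hypothetical precomputed table: shade losses are nonlinear, so the loss
# percentage generally exceeds the shaded fraction of the array.
SHADE_FRACTION = [0.0, 0.1, 0.25, 0.5, 1.0]
LOSS_PERCENT   = [0.0, 8.0, 22.0, 55.0, 100.0]

def shade_loss(fraction):
    """Linearly interpolate the tabulated loss percentage for a shaded fraction."""
    if fraction <= SHADE_FRACTION[0]:
        return LOSS_PERCENT[0]
    if fraction >= SHADE_FRACTION[-1]:
        return LOSS_PERCENT[-1]
    i = bisect_left(SHADE_FRACTION, fraction)
    x0, x1 = SHADE_FRACTION[i - 1], SHADE_FRACTION[i]
    y0, y1 = LOSS_PERCENT[i - 1], LOSS_PERCENT[i]
    return y0 + (y1 - y0) * (fraction - x0) / (x1 - x0)
```

The actual database is keyed by basic system inputs available in any PV simulation tool; the interpolation step is the same idea in more dimensions.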
Toni Smith; Lyudmila V. Slipchenko; Mark S. Gordon
2008-02-27
This study compares the results of the general effective fragment potential (EFP2) method to the results of a previous combined coupled cluster with single, double, and perturbative triple excitations [CCSD(T)] and symmetry-adapted perturbation theory (SAPT) study [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690] on substituent effects in π-π interactions. EFP2 is found to accurately model the binding energies of the benzene-benzene, benzene-phenol, benzene-toluene, benzene-fluorobenzene, and benzene-benzonitrile dimers, as compared with high-level methods [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690], but at a fraction of the computational cost of CCSD(T). In addition, an EFP-based Monte Carlo/simulated annealing study was undertaken to examine the potential energy surface of the substituted dimers.
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a microcomputer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, v_x, v_y) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, v_r, v_z) phase space are presented.
A hybrid transport-diffusion model for radiative transfer in absorbing and scattering media
Roger, M.; Caliot, C.; Crouseilles, N.; Coelho, P.J.
2014-10-15
A new multi-scale hybrid transport-diffusion model for radiative transfer is proposed in order to improve the efficiency of the calculations close to the diffusive regime, in absorbing and strongly scattering media. In this model, the radiative intensity is decomposed into a macroscopic component calculated by the diffusion equation, and a mesoscopic component. The transport equation for the mesoscopic component corrects the estimate from the diffusion equation and thereby recovers the solution of the linear radiative transfer equation. In this work, results are presented for stationary and transient radiative transfer cases, in examples drawn from concentrated solar and optical tomography applications. The Monte Carlo and the discrete-ordinate methods are used to solve the mesoscopic equation. It is shown that the multi-scale model improves the efficiency of the calculations when the medium is close to the diffusive regime. The proposed model is a good alternative for radiative transfer in the intermediate regime, where the macroscopic diffusion equation is not accurate enough and the radiative transfer equation requires too much computational effort.
Fix, N. J.
2008-01-31
The purpose of the project is to conduct research at an Integrated Field-Scale Research Challenge Site in the Hanford Site 300 Area, CERCLA OU 300-FF-5 (Figure 1), to investigate multi-scale mass transfer processes associated with a subsurface uranium plume impacting both the vadose zone and groundwater. The project will investigate a series of science questions posed for research related to the effect of spatial heterogeneities, the importance of scale, coupled interactions between biogeochemical, hydrologic, and mass transfer processes, and measurements/approaches needed to characterize a mass-transfer dominated system. The research will be conducted by evaluating three (3) different hypotheses focused on multi-scale mass transfer processes in the vadose zone and groundwater, their influence on field-scale U(VI) biogeochemistry and transport, and their implications to natural systems and remediation. The project also includes goals to 1) provide relevant materials and field experimental opportunities for other ERSD researchers and 2) generate a lasting, accessible, and high-quality field experimental database that can be used by the scientific community for testing and validation of new conceptual and numerical models of subsurface reactive transport.
Shell model method for Gamow-Teller transitions in heavy, deformed nuclei
Gao Zaochun [Joint Institute for Nuclear Astrophysics and Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan 48824 (United States); China Institute of Atomic Energy, P.O. Box 275 (18), Beijing 102413 (China); Sun Yang [Joint Institute for Nuclear Astrophysics and Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Chen, Y.-S. [China Institute of Atomic Energy, P.O. Box 275(18), Beijing 102413 (China); Institute of Theoretical Physics, Academia Sinica, Beijing 100080 (China)
2006-11-15
A method for calculation of Gamow-Teller transition rates is developed by using the concept of the Projected Shell Model (PSM). The shell model basis is constructed by superimposing angular-momentum-projected multiquasiparticle configurations, and nuclear wave functions are obtained by diagonalizing the two-body interactions in these projected states. Calculation of transition matrix elements in the PSM framework is discussed in detail, and the effects caused by the Gamow-Teller residual forces and by configuration-mixing are studied. With this method, it may become possible to perform a state-by-state calculation for β-decay and electron-capture rates in heavy, deformed nuclei at finite temperatures. Our first example indicates that, while experimentally known Gamow-Teller transition rates from the ground state of the parent nucleus are reproduced, stronger transitions from some low-lying excited states are predicted to occur, which may considerably enhance the total decay rates once these nuclei are exposed to hot stellar environments.
Corcelli, S.A.; Kress, J.D.; Pratt, L.R.
1995-08-07
This paper develops and characterizes mixed direct-iterative methods for boundary integral formulations of continuum dielectric solvation models. We give an example, the Ca²⁺⋯Cl⁻ pair potential of mean force in aqueous solution, for which a direct solution at thermal accuracy is difficult and thus for which mixed direct-iterative methods seem necessary to obtain the required high resolution. For the simplest such formulations, Gauss-Seidel iteration diverges in rare cases. This difficulty is analyzed by obtaining the eigenvalues and the spectral radius of the non-symmetric iteration matrix. This establishes that those divergences are due to inaccuracies of the asymptotic approximations used in evaluation of the matrix elements corresponding to accidental close encounters of boundary elements on different atomic spheres. The spectral radii are then greater than one for those diverging cases. This problem is cured by checking for boundary element pairs closer than the typical spatial extent of the boundary elements and, for those cases, performing an "in-line" Monte Carlo integration to evaluate the required matrix elements. These difficulties are not expected and have not been observed for the thoroughly coarsened equations obtained when only a direct solution is sought. Finally, we give an example application of hybrid quantum-classical methods to deprotonation of orthosilicic acid in water.
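The convergence criterion invoked above is the standard one: Gauss-Seidel converges exactly when the spectral radius of its iteration matrix is below one. A minimal numerical check of that criterion (generic toy matrices, not the boundary-element systems of the paper) might look like:

```python
import numpy as np

def gauss_seidel_spectral_radius(A):
    """Spectral radius of M = -(D + L)^{-1} U for the splitting A = D + L + U."""
    lower = np.tril(A)              # D + L (diagonal plus strictly lower part)
    upper = A - lower               # strictly upper triangular U
    M = -np.linalg.solve(lower, upper)
    return max(abs(np.linalg.eigvals(M)))

A_good = np.array([[4.0, 1.0], [1.0, 3.0]])  # diagonally dominant: converges
A_bad = np.array([[1.0, 3.0], [2.0, 1.0]])   # spectral radius > 1: diverges
```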
Michael R Tonks; Yongfeng Zhang; Xianming Bai
2014-06-01
This report summarizes development work funded by the Nuclear Energy Advanced Modeling and Simulation program's Fuels Product Line (FPL) to develop a mechanistic model for the average grain size in UO₂ fuel. The model is developed using a multiscale modeling and simulation approach involving atomistic simulations, as well as mesoscale simulations using INL's MARMOT code.
Taylor, G.; Dong, C.; Sun, S.
2010-03-18
A mathematical model for contaminant species passing through fractured porous media is presented. In the numerical model, we combine two locally conservative methods, i.e., the mixed finite element (MFE) and finite volume methods. An adaptive triangular mesh is used for effective treatment of the fractures. A hybrid MFE method is employed to provide an accurate approximation of the velocity field for both the fractures and the matrix, which is crucial to the convection part of the transport equation. The finite volume method and the standard MFE method are used to approximate the convection and dispersion terms, respectively. The model is used to investigate the interaction of adsorption with transport and to extract information on effective adsorption distribution coefficients. Numerical examples in different fractured media illustrate the robustness and efficiency of the proposed numerical model.
Commercial Implementation of Model-Based Manufacturing of Nanostructured Metals
Lowe, Terry C.
2012-07-24
Computational modeling is an essential tool for commercial production of nanostructured metals. At the high strength levels achievable in nanostructured metals, strength is limited by imperfections, so processing to achieve homogeneity at the micro- and nano-scales is critical. Manufacturing of nanostructured metal products requires computer control, monitoring, and modeling. Large-scale manufacturing of bulk nanostructured metals by Severe Plastic Deformation is intrinsically a multi-scale problem, and multiple scales of modeling must be integrated to predict and control nanostructural, microstructural, and macrostructural product characteristics and production processes.
Lindquist, W. Brent; Jones, Keith W.; Um, Wooyong; Rockhold, Mark; Peters, Catherine A.; Celia, Michael A.
2013-02-15
This project addressed the scaling of geochemical reactions to core and field scales, and the interrelationship between reaction rates and flow in porous media. We targeted reactive transport problems relevant to the Hanford site, specifically the reaction of highly caustic, radioactive waste solutions with subsurface sediments, and the immobilization of ⁹⁰Sr and ¹²⁹I through mineral incorporation and passive flow blockage, respectively. We addressed the correlation of results for pore-scale fluid-soil interaction with field-scale fluid flow, with the specific goals of (i) predicting attenuation of radionuclide concentration; (ii) estimating changes in flow rates through changes of soil permeabilities; and (iii) estimating effective reaction rates. In supplemental work, we also simulated reactive transport systems relevant to geologic carbon sequestration. As a whole, this research generated a better understanding of reactive transport in porous media, and resulted in more accurate methods for reaction rate upscaling and improved prediction of permeability evolution. These scientific advancements will ultimately lead to better tools for management and remediation of DOE's legacy waste problems. We established three key issues of reactive flow upscaling, and organized this project in three corresponding thrust areas. 1) Reactive flow experiments. The combination of mineral dissolution and precipitation alters pore network structure and the subsequent flow velocities, thereby creating a complex interaction between reaction and transport. To examine this phenomenon, we conducted controlled laboratory experimentation using reactive flow-through columns. • Results and Key Findings: Four reactive column experiments (S1, S3, S4, S5) have been completed in which simulated tank waste leachate (STWL) was reacted with pure quartz sand, with and without aluminum. The STWL is a caustic solution that dissolves quartz.
Because Al is a necessary element in the formation of secondary mineral precipitates (cancrinite), conducting experiments under conditions with and without Al allowed us to experimentally separate the conditions that lead to quartz dissolution from the conditions that lead to quartz dissolution plus cancrinite precipitation. Consistent with our expectations, in the experiments without Al, there was a substantial reduction in volume of the solid matrix. With Al there was a net increase in the volume of the solid matrix. The rate and extent of reaction was found to increase with temperature. These results demonstrate a successful effort to identify conditions that lead to increases and conditions that lead to decreases in solid matrix volume due to reactions of caustic tank wastes with quartz sands. In addition, we have begun to work with slightly larger, intermediate-scale columns packed with Hanford natural sediments and quartz. Similar dissolution and precipitation were observed in these columns. The measurements are being interpreted with reactive transport modeling using STOMP; preliminary observations are reported here. 2) Multi-Scale Imaging and Analysis. Mineral dissolution and precipitation rates within a porous medium will be different in different pores due to natural heterogeneity and the heterogeneity that is created from the reactions themselves. We used a combination of X-ray computed microtomography, backscattered electron and energy dispersive X-ray spectroscopy combined with computational image analysis to quantify pore structure, mineral distribution, structure changes and fluid-air and fluid-grain interfaces. • Results and Key Findings: Three of the columns from the reactive flow experiments at PNNL (S1, S3, S4) were imaged using 3D X-ray computed microtomography (XCMT) at BNL and analyzed using 3DMA-rock at SUNY Stony Brook. The imaging results support the mass balance findings reported by Dr.
Um's group, regarding the substantial dissolution of quartz in column S1. An important observation is that of grain movement accompanying dissolution in the unconsolidated media. The resultant movement
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Energy Science and Technology Software Center (OSTI)
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator of diffraction images from an input microstructure.
Signal Processing Model/Method for Recovering Acoustic Reflectivity of Spot Weld
Energy Science and Technology Software Center (OSTI)
2005-09-08
Until recently, U.S. auto manufacturers have inspected the veracity of welds in the auto bodies they build by using destructive tear-down, which typically results in more than $1 M of scrappage per plant per year. Much of this expense could possibly be avoided with a nondestructive technique (and 100% instead of 1% inspection could be achieved). Recent advances in ultrasound probes promise to provide a sufficiently accurate non-destructive evaluation technique, but the necessary signal processing has not yet been developed. This disclosure describes a signal processing model and method useful for diagnosing the veracity of spot welds between two sheets of the same thickness from ultrasound signals. Standard systems theory describes a signal as a convolution of a transducer function, h(t), and an impulse train (beta(t), tau(t)) [1] (see Eq. (1) attached). With a Gaussian wavelet as a transducer function, this model describes the signal from an ultrasound probe quite well, and the literature provides many methods for "deconvolution," for recovery of the impulse train from the signal [see, e.g., 2-3]. What is novel about the technique disclosed is the model that describes the impulse train as a function of reflectivity, the share of energy incident on the interface that is reflected, and that allows the recovery of its estimated value. The reflectivity estimate provides an ideal indicator of weld veracity, compressing each signal into a single value between 0 and 1, which can then be displayed as a 2D greyscale or colormap of the weld. The model describing the system is attached as Eqs. (2). These equations account for the energy in the probe-side and opposite sheets. In each period, this energy is a sum of that reflected from the same sheet plus that transmitted from the opposite (dampened by material attenuation at rate a).
This model is consistent with physical first principles (in particular the First and Second Laws of Thermodynamics) and has been verified empirically. For fast estimation of R using only observations beta(1, ..., T), a receiver state equation has been derived, and is attached as Eq. (3). This equation has the further advantage that the initial impulse S need not be known; rather, it is estimated simultaneously. This is necessary because element failure and coupling can cause large variations in S. Constrained nonlinear least squares techniques can be applied to this equation to recover reflectivity (and initial impulse) [4]. In particular, the Gauss-Newton algorithm on the log of the sum of squared errors based on the receiver state equation is recommended. To summarize, it is the model described in Eqs. (2) and (3) that is novel, and that enables the recovery of acoustic reflectivity from the ultrasound signals. It has been verified that this reflectivity estimate provides a better indicator of weld veracity than other features previously derived from such signals.
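The disclosed receiver state equation (Eq. (3)) is not reproduced in this record. Purely to illustrate the estimation idea of recovering both R and the unknown initial impulse S from echo amplitudes, the toy sketch below assumes a simplified geometric decay, beta_t = S·(a·R)^t, with known attenuation a; this is an invented stand-in for the actual model, fitted here by an exact log-linear least squares rather than the Gauss-Newton scheme recommended above.

```python
import math

def recover_R_and_S(betas, attenuation):
    """Fit log(beta_t) = log(S) + t*log(a*R) by least squares; return (R, S)."""
    t = list(range(1, len(betas) + 1))
    y = [math.log(b) for b in betas]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
    intercept = ybar - slope * tbar
    return math.exp(slope) / attenuation, math.exp(intercept)

# Synthetic echo amplitudes with reflectivity R = 0.6, impulse S = 2.0, a = 0.9
true_R, true_S, a = 0.6, 2.0, 0.9
betas = [true_S * (a * true_R) ** t for t in range(1, 8)]
R_est, S_est = recover_R_and_S(betas, a)
```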
L. Pan; Y. Seol; G. Bodvarsson
2004-04-29
The dual-continuum random-walk particle tracking approach is an attractive method for simulating transport in a fractured porous medium. For such a model to be truly successful, however, the key issue is to properly simulate the mass transfer between the fracture and matrix continua. In a recent paper, Pan and Bodvarsson (2002) proposed an improved scheme for simulating fracture-matrix mass transfer, by introducing the concept of activity range into the calculation of fracture-matrix particle-transfer probability. By comparing with analytical solutions, they showed that their scheme successfully captured the transient diffusion depth into the matrix without any additional subgrid (matrix) cells. This technical note presents an expansion of their scheme to cases in which significant water flow through the fracture-matrix interface exists. The dual-continuum particle tracker with this new scheme was found to be as accurate as a numerical model using a more detailed grid. The improved scheme can be readily incorporated into the existing particle-tracking code, while still maintaining the advantage of needing no additional matrix cells to capture transient features of particle penetration into the matrix.
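The core mechanism, a particle switching continua with some per-step transfer probability, can be sketched minimally. This is not Pan and Bodvarsson's scheme: the constant probabilities p_fm and p_mf below are invented placeholders for the physically derived, activity-range-based transfer probabilities described above.

```python
import random

def walk(steps, p_fm, p_mf, rng):
    """Random-walk particle hopping between fracture and matrix continua.

    Returns the number of steps spent in the matrix continuum."""
    location = "fracture"
    time_in_matrix = 0
    for _ in range(steps):
        if location == "fracture" and rng.random() < p_fm:
            location = "matrix"
        elif location == "matrix" and rng.random() < p_mf:
            location = "fracture"
        if location == "matrix":
            time_in_matrix += 1
    return time_in_matrix

# Long-run fraction of time in the matrix approaches p_fm / (p_fm + p_mf).
rng = random.Random(42)
frac = walk(100000, p_fm=0.02, p_mf=0.08, rng=rng) / 100000
```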
Langton, C.; Kosson, D.
2009-11-30
Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials; (2) methodologies for modeling the performance of these mechanisms and processes; and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.
The various chapters contain both a description of the mechanism and a discussion of the current approaches to modeling the phenomena.
Samala, Ravi K.; Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir; Wei, Jun; Helvie, Mark A.; Sahiner, Berkman
2014-02-15
Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volume enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART) that was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined multiscale Hessian response to enhance MCs by shape and bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values and the number of MCs in the cluster, cluster shape, and cluster based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare with that of a previous study. Results: Unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs.
For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement in the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.
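The candidate ranking above keys on each object's contrast-to-noise ratio. A minimal sketch of a generic CNR computation follows; the paper's exact signal- and background-region definitions are not reproduced here, so the region choices below are illustrative only:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: (mean signal - mean background) / background std."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return (s.mean() - b.mean()) / b.std()

# A candidate object is kept only if its CNR clears an adaptive threshold.
background = [98.0, 102.0, 100.0, 100.0]   # mean 100, population std sqrt(2)
candidate = [110.0, 112.0, 111.0]          # mean 111
print(f"CNR = {cnr(candidate, background):.2f}")   # 11 / sqrt(2) ~ 7.78
```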
Kramer, Sharlotte Lorraine Bolyard; Scherzinger, William M.
2014-09-01
The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identification that relies on full-field experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-field experimental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the implementation and evaluation of VFM primarily for finite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Correlation (DIC) optical technique that provides full-field displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitutive model parameter identification, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
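The core idea can be sketched on a deliberately simplified problem: given a full-field strain measurement and a chosen virtual field, the principle of virtual work yields the constitutive parameter directly, with no iterative FEM solves. This toy uses a hypothetical linear-elastic uniaxial bar, not the report's finite-deformation BCJ model, and all numbers are assumptions:

```python
import numpy as np

# Toy VFM identification: a uniaxial bar of unit cross-section under traction F.
# Principle of virtual work with the virtual displacement u*(x) = x (eps* = 1):
#     E * integral(eps * eps*) dx = F * u*(L)   =>   E = F / mean(eps)
rng = np.random.default_rng(1)
E_true = 70e9                       # assumed true modulus [Pa]
F = 70e6                            # applied traction [Pa]
eps_true = F / E_true               # uniform strain, 1e-3
# "Measured" full-field strain: truth plus DIC-like white noise.
eps_field = eps_true + rng.normal(0.0, 2e-5, size=500)
E_identified = F / eps_field.mean()
print(f"identified E = {E_identified / 1e9:.1f} GPa")   # close to 70 GPa
```

For nonlinear models, the same work balance is written for several virtual fields and the parameters minimize the residual, which is what makes VFM fast relative to FEMU.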
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2011-12-22
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
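At the microscopic level, such stochastic differential equations are typically advanced with schemes like Euler-Maruyama. The sketch below integrates a generic scalar SDE (an Ornstein-Uhlenbeck process); the actual bead-rod model is vector-valued and adds an implicit rod-length constraint, which is omitted here:

```python
import numpy as np

def euler_maruyama(a, b, x0, dt, n_steps, rng):
    """Integrate dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))     # Brownian increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

# Ornstein-Uhlenbeck test case: mean reversion toward zero with noise.
rng = np.random.default_rng(2)
path = euler_maruyama(a=lambda x: -x, b=lambda x: 0.3,
                      x0=1.0, dt=1e-3, n_steps=5000, rng=rng)
print(f"X(5) = {path[-1]:.3f}")   # fluctuates near zero at stationarity
```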
A Multi-Methods Approach to HRA and Human Performance Modeling: A Field Assessment
Jacques Hugo; David I Gertman
2012-06-01
The Advanced Test Reactor (ATR), a research reactor at the Idaho National Laboratory, is primarily designed and used to test materials for other, larger-scale and prototype reactors. The reactor offers various specialized systems and allows certain experiments to be run at their own temperature and pressure. The ATR Canal temporarily stores completed experiments and used fuel. It also has facilities to conduct underwater operations such as experiment examination or removal. In reviewing the ATR safety basis, a number of concerns were identified involving the ATR canal. A brief study identified ergonomic issues involving the manual handling of fuel elements in the canal that may increase the probability of human error and of unwanted acute physical outcomes to the operator. In response to this concern, a study was conducted that refined the previous HRA scoping analysis by determining the probability of the inadvertent exposure of a fuel element to the air during fuel movement and inspection. The HRA analysis employed the SPAR-H method and was supplemented by information gained from a detailed analysis of the fuel inspection and transfer tasks. This latter analysis included ergonomics, work cycles, task duration, and workload imposed by tool and workplace characteristics, personal protective clothing, and operational practices that have the potential to increase physical and mental workload. Part of this analysis consisted of NASA-TLX analyses, combined with operational sequence analysis, computational human performance analysis (CHPA), and 3D graphical modeling to determine task failures and precursors to such failures that have safety implications. Experience in applying multiple analysis techniques in support of HRA methods is discussed.
Wang, Chao-Ying; Li, Chen-liang; Wu, Guo-Xun; Wang, Bao-Lai; Yang, Li-Jun; Zhao, Wei; Meng, Qing-Yuan
2014-01-28
The multi-scale simulation method is employed to investigate how defects affect the performance of Li-ion batteries (LIBs). The stable positions, binding energies, and dynamic properties of a Li impurity in Si with a 30° partial dislocation and stacking fault (SF) have been studied in comparison with the ideal crystal. It is found that the most stable position is the tetrahedral (T{sub d}) site and the diffusion barrier is 0.63 eV in bulk Si. In the 30° partial dislocation core and SF region, the most stable positions are at the centers of the octagons (Oct-A and Oct-B) and pentahedron (site S), respectively. In addition, Li dopants may tend to congregate at these defects. The motion of Li along the dislocation core is carried out by transport among the Oct-A (Oct-B) sites with a barrier of 1.93 eV (1.12 eV). In the SF region, the diffusion barrier of Li is 0.91 eV. These two types of defects may retard the fast migration of Li dopants, which are finally trapped by them. Thus, the presence of the 30° partial dislocation and SF may deactivate the Li impurity and lead to low rate capability of LIBs.
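Under a simple Arrhenius picture (assuming a common attempt prefactor, which the abstract does not quote), the barrier differences above translate into enormous hop-rate ratios at room temperature:

```python
import math

K_B = 8.617e-5                      # Boltzmann constant [eV/K]
kT = K_B * 300.0                    # thermal energy at 300 K [eV]

def relative_rate(barrier_eV):
    """Hop rate relative to a unit attempt frequency: exp(-Ea / kT)."""
    return math.exp(-barrier_eV / kT)

bulk = relative_rate(0.63)          # ideal crystal
sf = relative_rate(0.91)            # stacking-fault region
core = relative_rate(1.12)          # faster of the two dislocation-core paths
print(f"bulk hopping is ~{bulk / sf:.0e}x faster than in the SF region")
print(f"bulk hopping is ~{bulk / core:.0e}x faster than along the core")
```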
Haihua Zhao; Per F. Peterson
2010-10-01
Thermal mixing and stratification phenomena play major roles in the safety of reactor systems with large enclosures, such as containment safety in the current fleet of LWRs, long-term passive containment cooling in Gen III+ plants including the AP-1000 and ESBWR, cold and hot pool mixing in pool-type sodium-cooled fast reactor systems (SFRs), and reactor cavity cooling system behavior in high temperature gas-cooled reactors (HTGRs). Depending on the fidelity requirement and available computational resources, 0-D steady-state models (heat transfer correlations), 0-D lumped-parameter transient models, 1-D physics-based coarse-grained models, and 3-D CFD models are available. Current major system analysis codes have either no models or only 0-D models for thermal stratification and mixing, which can give only highly approximate results for simple cases. While 3-D CFD methods can be used to analyze simple configurations, they require very fine grid resolution to resolve thin substructures such as jets and wall boundaries. Due to prohibitive computational expense for long transients in very large volumes, 3-D CFD simulations remain impractical for system analyses. For mixing in stably stratified large enclosures, UC Berkeley developed 1-D models based on Zuber's hierarchical two-tiered scaling analysis (HTTSA) method, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. This paper presents an overview of the important thermal mixing and stratification phenomena in large enclosures of different reactors, the major modeling methods and their advantages and limits, and potential paths to improve simulation capability and reduce analysis uncertainty in this area for advanced reactor system analysis tools.
Multi-Scale Characterization of Improved Algae Strains
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
... Three Objectives 1. Establish a pipeline for evaluating improved strains under ... Improvements in the model should permit better predictions and testing of outdoor ...
Multiscale analysis of nonlinear systems using computational homology
Konstantin Mischaikow (Rutgers University / Georgia Institute of Technology); Michael Schatz (Georgia Institute of Technology); William Kalies (Florida Atlantic University); Thomas Wanner (George Mason University)
2010-05-24
This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists the major directions of research pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included (a) a clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (b) an investigation of homology as a probe for flow dynamics, and (c) the construction of a new convection apparatus for probing the effects of large aspect ratios. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high-speed, high-sensitivity digital imaging in conjunction with voltage-sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large-scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - Two experts in the area of granular media are studying 2D model experiments of earthquake dynamics in which the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. 
(5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.
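Most of these efforts reduce to computing Betti numbers of binary patterns extracted from experimental or simulated fields. A toy stand-in for that computation (a hand-rolled 2-D cubical-complex count, not the project's actual software):

```python
import numpy as np

def betti_2d(img):
    """beta0 (connected components) and beta1 (holes) of a 2-D binary pattern,
    treating each True pixel as a filled unit square of a cubical complex."""
    img = np.asarray(img, dtype=bool)
    verts, edges, faces = set(), set(), 0
    for i, j in zip(*np.nonzero(img)):
        faces += 1
        for di, dj in ((0, 0), (0, 1), (1, 0), (1, 1)):
            verts.add((i + di, j + dj))
        edges.update({((i, j), (i, j + 1)), ((i, j), (i + 1, j)),
                      ((i + 1, j), (i + 1, j + 1)), ((i, j + 1), (i + 1, j + 1))})
    chi = len(verts) - len(edges) + faces          # Euler characteristic
    # beta0 by flood fill; squares sharing even a corner are connected.
    pixels = set(zip(*np.nonzero(img)))
    seen, beta0 = set(), 0
    for p in pixels:
        if p in seen:
            continue
        beta0 += 1
        stack = [p]
        while stack:
            ci, cj = stack.pop()
            if (ci, cj) in seen:
                continue
            seen.add((ci, cj))
            stack += [(ci + di, cj + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (ci + di, cj + dj) in pixels]
    return beta0, beta0 - chi                      # beta1 = beta0 - chi in 2-D

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(betti_2d(ring))   # (1, 1): one component enclosing one hole
```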
EFFECTS OF PORE STRUCTURE CHANGE AND MULTI-SCALE HETEROGENEITY...
Office of Scientific and Technical Information (OSTI)
a better understanding of reactive transport in porous media, and resulted in more accurate methods for reaction rate upscaling and improved prediction of permeability evolution. ...
SEISMIC AND ROCK PHYSICS DIAGNOSTICS OF MULTISCALE RESERVOIR TEXTURES
Gary Mavko
2003-10-01
As part of our study on ''Relationships between seismic properties and rock microstructure'', we have (1) studied relationships between velocity and permeability; (2) used independent experimental methods to measure the elastic moduli of clay minerals as functions of pressure and saturation; (3) applied different statistical methods for characterizing heterogeneity and textures from scanning acoustic microscope (SAM) images of shale microstructures; (4) analyzed the directional dependence of velocity and attenuation in different reservoir rocks; (5) compared Vp measured under hydrostatic and non-hydrostatic stress conditions in sands; and (6) studied stratification as a source of intrinsic anisotropy in sediments using Vp and statistical methods for characterizing textures in sands.
Thornton, J.W.; McDowell, T.P.; Hughes, P.J.
1997-09-01
The results of five practical vertical ground heat exchanger sizing programs are compared against a detailed simulation model that has been calibrated to monitored data taken from one military family housing unit at Fort Polk, Louisiana. The calibration of the detailed model to data is described in a companion paper. The assertion that the data/detailed model is a useful benchmark for practical sizing methods is based on this calibration. The results from the comparisons demonstrate the current level of agreement between vertical ground heat exchanger sizing methods in common use. It is recommended that the calibration and comparison exercise be repeated with data sets from additional sites in order to build confidence in the practical sizing methods.
Phifer, Mark A.; Smith, Frank G. III
2013-06-21
A 3-D STOMP model has been developed for the Portsmouth On-Site Waste Disposal Facility (OSWDF) at Site D, as outlined in Appendix K of FBP 2013. This model projects the flow and transport of the following radionuclides to various points of assessment: Tc-99, U-234, U-235, U-236, U-238, Am-241, Np-237, Pu-238, Pu-239, Pu-240, Th-228, and Th-230. The model includes the radioactive decay of these parents but not the associated daughter ingrowth, which STOMP cannot model. The Savannah River National Laboratory (SRNL) provides herein a recommended method to account for daughter ingrowth in association with the Portsmouth OSWDF Performance Assessment (PA) modeling.
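For a two-member chain, the ingrowth being corrected for is the textbook Bateman solution. The sketch below illustrates it for one of the listed parent/daughter pairs; this is the standard closed form, not SRNL's recommended correction method:

```python
import math

def daughter_atoms(n1_0, lam1, lam2, t):
    """Bateman solution: daughter atoms at time t from an initially pure
    parent (N2(0) = 0), with decay constants lam1 (parent), lam2 (daughter)."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

ln2 = math.log(2.0)
lam_u234 = ln2 / 245_500.0      # U-234 decay constant [1/y]
lam_th230 = ln2 / 75_380.0      # Th-230 decay constant [1/y]
n_th230 = daughter_atoms(1.0e6, lam_u234, lam_th230, t=10_000.0)
print(f"Th-230 atoms per 1e6 initial U-234 atoms after 10,000 y: {n_th230:.0f}")
```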
Thermodynamic Development of Corrosion Rate Modeling in Iron Phosphate Glasses
Office of Scientific and Technical Information (OSTI)
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP Reactor Concepts (Final Report, Project No. 09-812)
Dr. Thomas Downar, University of Michigan; Rob Versluis, Federal POC; Hans Gougar, Technical POC
2013-04-30
Method for quantifying the prediction uncertainties associated with water quality models
Summers, J.K.; Wilson, H.T.; Kou, J.
1993-01-01
Many environmental regulatory agencies depend on models to organize, understand, and utilize the information for regulatory decision making. A general analytical protocol was developed to quantify prediction error associated with commonly used surface water quality models. Its application is demonstrated by comparing water quality models configured to represent different levels of spatial, temporal, and mechanistic complexity. This comparison can be accomplished by fitting the models to a benchmark data set. Once the models are successfully fitted to the benchmark data, the prediction errors associated with each application can be quantified using the Monte Carlo simulation techniques.
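The Monte Carlo step can be sketched as follows: sample the fitted parameters from their estimated uncertainty and propagate each sample through the model to get a prediction band. The first-order BOD decay model and all parameter values here are hypothetical stand-ins, not taken from the protocol itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def bod_remaining(t, k, L0):
    """First-order BOD decay: oxygen demand remaining at time t."""
    return L0 * np.exp(-k * t)

# Fitted parameters with (assumed) standard errors from the calibration step.
k_samples = rng.normal(0.23, 0.03, size=10_000)    # decay rate [1/day]
L0_samples = rng.normal(12.0, 1.0, size=10_000)    # ultimate BOD [mg/L]

pred = bod_remaining(5.0, k_samples, L0_samples)   # predictions at day 5
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"day-5 BOD 95% prediction band: [{lo:.1f}, {hi:.1f}] mg/L")
```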
Xu, Zhijie; Li, Dongsheng; Xu, Wei; Devaraj, Arun; Colby, Robert J.; Thevuthasan, Suntharampillai; Geiser, B. P.; Larson, David J.
2015-04-01
In atom probe tomography (APT), accurate reconstruction of the spatial positions of field-evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and the evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material, in addition to the assumption of a steady-state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layer-structured APT tips. The simulation results of the shape evolution by the level set model qualitatively agree with the finite element method and with literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of a heterogeneous tip, and the interface curvature can potentially lead to artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides smoother interface representation with the aid of its intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations, and the effect of the evaporation law on tip shape evolution is also presented.
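The generic machinery behind such a model is level set evolution of a signed-distance function under a normal speed. The 1-D sketch below uses a constant speed and first-order Godunov upwinding; the actual APT simulation is 3-D and makes the speed follow a linear or exponential evaporation law instead:

```python
import numpy as np

# Minimal 1-D level set evolution: phi_t + F * |dphi/dx| = 0, F > 0.
def evolve(phi, F, dx, dt, n_steps):
    for _ in range(n_steps):
        dminus = np.diff(phi, prepend=phi[0]) / dx   # backward difference
        dplus = np.diff(phi, append=phi[-1]) / dx    # forward difference
        # Godunov upwind gradient magnitude for an outward-moving front.
        grad = np.sqrt(np.maximum(dminus, 0.0)**2 + np.minimum(dplus, 0.0)**2)
        phi = phi - dt * F * grad
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                        # signed distance; interface at x = 0.3
phi = evolve(phi0, F=1.0, dx=x[1] - x[0], dt=0.002, n_steps=100)
ix = int(np.argmin(np.abs(phi)))
print(f"interface position after t = 0.2: x = {x[ix]:.2f}")  # moved 0.3 -> 0.5
```

The one-sided differences at the endpoints (via `prepend`/`append`) keep the front propagating at speed F without wrap-around artifacts.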
Juxiu Tong; Bill X. Hu; Hai Huang; Luanjin Guo; Jinzhong Yang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination caused by reactive solute transport becomes even more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of chemical reaction processes and the limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on the ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method that imposes constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF could efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
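The analysis step of a stochastic EnKF can be sketched on a toy problem: estimate a scalar rate parameter k from noisy observations of a linear-in-time concentration c = k t. The rate law, numbers, and seed here are hypothetical; the paper's constrained variant additionally clips updated parameters and concentrations to physically meaningful ranges:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step.  ensemble: (n_ens, n_state) array;
    obs_op maps one state vector to its predicted scalar observation."""
    pred = np.array([obs_op(m) for m in ensemble])        # (n_ens,)
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = pred - pred.mean()                                # observation anomalies
    cov_xy = X.T @ Y / (len(ensemble) - 1)
    cov_yy = Y @ Y / (len(ensemble) - 1)
    gain = cov_xy / (cov_yy + obs_var)                    # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
    return ensemble + np.outer(perturbed_obs - pred, gain)

rng = np.random.default_rng(4)
ens = rng.normal(2.0, 1.0, size=(200, 1))   # prior ensemble for k (truth: 3.0)
for t, c_obs in [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]:
    ens = enkf_update(ens, c_obs, lambda m: m[0] * t, obs_var=0.01, rng=rng)
print(f"posterior k = {ens.mean():.2f} +/- {ens.std():.2f}")
```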
Change of variables as a method to study general β-models: Bulk universality
Shcherbina, M.
2014-04-15
We consider β matrix models with real analytic potentials. Assuming that the corresponding equilibrium density ρ has a one-interval support (without loss of generality σ = [−2, 2]), we study the transformation of the correlation functions after the change of variables λ{sub i} → ζ(λ{sub i}), with ζ(λ) chosen from the equation ζ′(λ)ρ(ζ(λ)) = ρ{sub sc}(λ), where ρ{sub sc}(λ) is the standard semicircle density. This gives us the deformed β-model, which has an additional interaction term. A standard transformation with the Gaussian integral allows us to show that the deformed β-model may be reduced to the standard Gaussian β-model with a small perturbation n{sup −1}h(λ). This reduces most of the problems of local and global regimes for β-models to the corresponding problems for the Gaussian β-model with a small perturbation. In the present paper, we prove the bulk universality of local eigenvalue statistics for both the one-cut and multi-cut cases.
Numerical method to test a theoretical model of the quantum interferen...
Office of Scientific and Technical Information (OSTI)
A numerical method is provided to fit the experimental conductivity to the complicated conductivity expression for the quantum interference effect of Anderson localization. This ...
Multiscale Toxicology - Building the Next Generation Tools for Toxicology
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.; Waters, Katrina M.
2012-09-01
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus) to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL's focus within this broader goal was to refine and apply the experimental and computational tools needed to provide a quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates across different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescence microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISSD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in the interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between the biocompatibility of nanoparticles and their physical form in a controlled manner.
SEISMIC AND ROCK PHYSICS DIAGNOSTICS OF MULTISCALE RESERVOIR TEXTURES
Gary Mavko
2003-06-30
As part of our study on ''Relationships between seismic properties and rock microstructure'', we have (1) studied methods for detection of stress-induced velocity anisotropy in sands, and (2) initiated efforts on velocity upscaling to quantify long-wavelength and short-wavelength velocity behavior and the scale-dependent dispersion caused by sediment variability in different depositional environments.
Application of Gaseous Sphere Injection Method for Modeling Under-expanded H2 Injection
Whitesides, R; Hessel, R P; Flowers, D L; Aceves, S M
2010-12-03
A methodology for modeling gaseous injection has been refined and applied to recent experimental data from the literature. This approach uses a discrete phase analogy to handle gaseous injection, allowing gaseous injection to be added to a CFD grid without needing to resolve the injector nozzle. This paper focuses on model testing to provide the basis for simulation of hydrogen direct-injected internal combustion engines. The model has been updated to be more applicable to full engine simulations, and shows good agreement with experiments for jet penetration and time-dependent axial mass fraction, while the available radial mass fraction data are predicted less well.
Gering, Kevin L
2013-08-27
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model is also based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
Lesch, David A; Adriaan Sachtler, J.W. J.; Low, John J; Jensen, Craig M; Ozolins, Vidvuds; Siegel, Don
2011-02-14
UOP LLC, a Honeywell Company, Ford Motor Company, and Striatus, Inc., collaborated with Professor Craig Jensen of the University of Hawaii and Professor Vidvuds Ozolins of the University of California, Los Angeles on a multi-year cost-shared program to discover novel complex metal hydrides for hydrogen storage. This innovative program combined sophisticated molecular modeling with high throughput combinatorial experiments to maximize the probability of identifying commercially relevant, economical hydrogen storage materials with broad application. A set of tools was developed to pursue the medium throughput (MT) and high throughput (HT) combinatorial exploratory investigation of novel complex metal hydrides for hydrogen storage. The assay programs consisted of monitoring hydrogen evolution as a function of temperature. This project also incorporated theoretical methods to help select candidate materials families for testing. The Virtual High Throughput Screening (VHTS) served as a virtual laboratory, calculating structures and their properties. First-principles calculations were applied to various systems to examine hydrogen storage reaction pathways and the associated thermodynamics. The experimental program began with the validation of the MT assay tool with NaAlH4/0.02 mole Ti, the state-of-the-art hydrogen storage system given by decomposition of sodium alanate to sodium hydride, aluminum metal, and hydrogen. Once certified, a combinatorial 21-point study of the NaAlH4 - LiAlH4 - Mg(AlH4)2 phase diagram was investigated with the MT assay. Stability proved to be a problem, as many of the materials decomposed during synthesis, altering the expected assay results. This resulted in repeating the entire experiment with a mild milling approach, which only temporarily increased capacity. NaAlH4 was the best performer in both studies and no new mixed alanates were observed, a result consistent with the VHTS. 
Powder XRD suggested that the reverse reaction, the regeneration of the alanate from alkali hydride, Al, and hydrogen, was hampering reversibility. The reverse reaction was then studied for the same phase diagram, starting with LiH, NaH, MgH2, and Al. The study was extended to phase diagrams including KH and CaH2 as well. The observed hydrogen storage capacity in the Al hexahydrides was less than 4 wt. %, well short of DOE targets. The HT assay came on line and, after certification with studies on NaAlH4, was first applied to the LiNH2 - LiBH4 - MgH2 phase diagram. The 60-point study elucidated trends within the system, locating an optimum material of 0.6 LiNH2 : 0.3 MgH2 : 0.1 LiBH4 that stored about 4 wt. % H2 reversibly and operated below 220 °C. Also present was the phase Li4(NH2)3BH4, which had been discovered in the LiNH2 - LiBH4 system. This new ternary formulation performed much better than the well-known 2 LiNH2 : MgH2 system by 50 °C in the HT assay. The Li4(NH2)3BH4 is a low-melting ionic liquid under our test conditions and facilitates the phase transformations required in the hydrogen storage reaction, which no longer relies on a higher-energy solid state reaction pathway. Further study showed that the 0.6 LiNH2 : 0.3 MgH2 : 0.1 LiBH4 formulation was very stable with respect to ammonia and diborane desorption; the observed desorption was of hydrogen. This result could not have been anticipated and was made possible by the efficiency of HT combinatorial methods. Investigation of the analogous LiNH2 - LiBH4 - CaH2 phase diagram revealed new reversible hydrogen storage materials, 0.625 LiBH4 + 0.375 CaH2 and 0.375 LiNH2 + 0.25 LiBH4 + 0.375 CaH2, operating at 1 wt. % reversible hydrogen below 175 °C. Powder x-ray diffraction revealed a new structure for the spent materials which had not been previously observed. 
While the storage capacity was not impressive, an important result is that boron appears to participate in a low-temperature reversible reaction. The last major area of study also focused
Rubert-Nason, Patricia; Mavrikakis, Manos; Maravelias, Christos T.; Grabow, Lars C.; Biegler, Lorenz T.
2014-04-01
Microkinetic models, combined with experimentally measured reaction rates and orders, play a key role in elucidating detailed reaction mechanisms in heterogeneous catalysis and have typically been solved as systems of ordinary differential equations. In this work, we demonstrate a new approach to fitting those models to experimental data. For the specific example treated here, by reformulating a typical microkinetic model for a continuous stirred tank reactor as a system of nonlinear equations, we achieved a 1000-fold increase in solution speed. The reduced computational cost allows a more systematic search of the parameter space, leading to better fits to the available experimental data. We applied this approach to the problem of methanol synthesis by CO/CO2 hydrogenation over a supported-Cu catalyst, an important catalytic reaction of large industrial interest and with potential for large-scale CO2 chemical fixation.
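The reformulation described above, solving for the steady state of a stirred tank reactor directly as a nonlinear algebraic system rather than integrating ODEs to steady state, can be sketched as follows. This is an illustrative toy mechanism with invented rate constants, not the methanol-synthesis model from the paper:

```python
from scipy.optimize import fsolve

# Toy steady-state CSTR microkinetic model, posed as nonlinear algebraic
# equations instead of ODEs integrated to steady state. Hypothetical
# two-step mechanism: A + * <-> A* ; A* -> B + *. All values invented.
k1f, k1r, k2 = 10.0, 2.0, 5.0   # assumed rate constants
tau, cA_in = 1.0, 1.0           # residence time and feed concentration

def residuals(x):
    cA, cB, thA = x                    # gas concentrations, A* coverage
    thS = 1.0 - thA                    # site balance gives free sites
    r1 = k1f * cA * thS - k1r * thA    # net adsorption rate
    r2 = k2 * thA                      # surface reaction rate
    return [
        (cA_in - cA) / tau - r1,       # steady CSTR balance on A
        (0.0 - cB) / tau + r2,         # steady CSTR balance on B
        r1 - r2,                       # steady-state coverage balance
    ]

cA, cB, thA = fsolve(residuals, [0.2, 0.8, 0.2])
```

A single root-finding solve replaces a long transient integration, which is where the speedup in this class of reformulations comes from.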
Seismic and Rockphysics Diagnostics of Multiscale Reservoir Textures
Gary Mavko
2005-07-01
This final technical report summarizes the results of the work done in this project. The main objective was to quantify rock microstructures and their effects in terms of elastic impedances in order to quantify the seismic signatures of microstructures. Acoustic microscopy and ultrasonic measurements were used to quantify microstructures and their effects on elastic impedances in sands and shales. The project led to the development of technologies for quantitatively interpreting rock microstructure images, understanding the effects of sorting, compaction, and stratification in sediments, and linking elastic data with geologic models to estimate reservoir properties. For the public, ultimately, better technologies for reservoir characterization translate to better reservoir development, reduced risks, and hence reduced energy costs.
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
Clement, T Prabhakar; Barnett, Mark O; Zheng, Chunmiao; Jones, Norman L
2010-05-05
DE-FG02-06ER64213: Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales. Investigators: T. Prabhakar Clement (PD/PI) and Mark O. Barnett (Auburn), Chunmiao Zheng (Univ. of Alabama), and Norman L. Jones (BYU). The objective of this project was to develop scalable modeling approaches for predicting the reactive transport of metal contaminants. We studied two contaminants, a radioactive cation [U(VI)] and a metal(loid) oxyanion system [As(III/V)], and investigated their interactions with two types of subsurface materials, iron and manganese oxyhydroxides. We also developed modeling methods for describing the experimental results. Overall, the project supported 25 researchers at three universities and produced 15 journal articles, 3 book chapters, 6 PhD dissertations, and 6 MS theses. Three key journal articles are: 1) Jeppu et al., A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands, Environ. Eng. Sci., 27(2): 147-158, 2010. 2) Loganathan et al., Scaling of adsorption reactions: U(VI) experiments and modeling, Applied Geochemistry, 24(11): 2051-2060, 2009. 3) Phillippi et al., Theoretical solid/solution ratio effects on adsorption and transport: uranium(VI) and carbonate, Soil Sci. Soc. Am. J., 71: 329-335, 2007.
Sensitivity of the Properties of Ruthenium Blue Dimer to Method, Basis Set, and Continuum Model
Ozkanlar, Abdullah; Clark, Aurora E.
2012-05-23
The ruthenium blue dimer [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been subject to numerous computational studies, primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of the blue dimer using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within the polarizable and conductor-like polarizable continuum models has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of the density functional and continuum cavity model employed.
Jason Heath; Brian McPherson; Thomas Dewers
2011-03-15
The assessment of caprocks for geologic CO2 storage is a multi-scale endeavor. Investigation of a regional caprock - the Kirtland Formation, San Juan Basin, USA - at the pore-network scale indicates high capillary sealing capacity and low permeabilities. Core- and well-scale data, however, indicate a potential seal bypass system, as evidenced by multiple mineralized fractures and methane gas saturations within the caprock. Our interpretation of 4He concentrations, measured at the top and bottom of the caprock, suggests low fluid fluxes through the caprock: (1) Of the total 4He produced in situ (i.e., at the locations of sampling) by uranium and thorium decay since deposition of the Kirtland Formation, a large portion still resides in the pore fluids. (2) Simple advection-only and advection-diffusion models, using the measured 4He concentrations, indicate low permeability (approximately 10^-20 m^2 or lower) for the thickness of the Kirtland Formation. These findings, however, do not guarantee the lack of a large-scale bypass system. The measured data, located near the boundary conditions of the models (i.e., the overlying and underlying aquifers), limit our testing of conceptual models and the sensitivity of model parameterization. Thus, we suggest approaches for future studies to better assess the presence or lack of a seal bypass system at this particular site and for other sites in general.
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
Gering, Kevin L.
2013-01-01
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
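The kinetics analysis described above can be sketched in a few lines. The Butler-Volmer expression below is the standard textbook form; the sigmoid modulation of the exchange current with pulse time, and every parameter value, are illustrative assumptions rather than the expressions claimed in the patent:

```python
import math

# Sketch: Butler-Volmer (BV) current with a sigmoid pulse-time correction
# to the exchange current density. The BV form is standard; the sigmoid
# form and all parameter values here are invented for illustration.
F, R, T = 96485.0, 8.314, 298.15   # Faraday const., gas const., temp [K]

def bv_current(eta, i0, alpha_a=0.5, alpha_c=0.5):
    """Classical Butler-Volmer current density at overpotential eta [V]."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))

def pulse_scaled_current(eta, i0, t_pulse, t_mid=1.0, k=2.0):
    """BV current with exchange current scaled by a sigmoid in pulse time
    (assumed form standing in for the patent's sigmoid-based expressions)."""
    sigmoid = 1.0 / (1.0 + math.exp(-k * (t_pulse - t_mid)))
    return bv_current(eta, i0 * sigmoid)
```

Fitting the parameters of such expressions against cell data over time is what allows the model to track kinetic performance along the aging timeline.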
Chen, E.P.; Costin, L.S.
1991-12-31
Pretest analysis of a heated block test, proposed for the Exploratory Studies Facility at Yucca Mountain, Nevada, was conducted in this investigation. Specifically, the study focused on evaluating the various designs for drilling holes and cutting slots for the block. The thermal/mechanical analysis was based on the finite element method and a compliant-joint rock-mass constitutive model. Based on the calculated results, the relative merits of the various test designs are discussed.
Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method (Preprint)
Kuss, Michael; Markel, Tony; Kramer, William
Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, November 5-9, 2010. Conference Paper NREL/CP-5400-48827, January 2011.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.
2015-09-08
A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the graph based on the resources, identifying data feeds in accordance with the scenario and updating the graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the graph using the analytical routines, and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the graph.
Elevated Temperature Primary Load Design Method Using Pseudo Elastic-Perfectly Plastic Model
Carter, Peter; Sham, Sam; Jetter, Robert I
2012-01-01
A new primary load design method for elevated temperature service has been developed. Codification of the procedure in an ASME Boiler and Pressure Vessel Code, Section III Code Case is being pursued. The proposed primary load design method is intended to provide the same margins on creep rupture, yielding and creep deformation for a component or structure that are implicit in the allowable stress data. It provides a methodology that does not require stress classification and is also applicable to a full range of temperature above and below the creep regime. Use of elastic-perfectly plastic analysis based on allowable stress with corrections for constraint, steady state stress and creep ductility is described. This approach is intended to ensure that traditional primary stresses are the basis for design, taking into account ductility limits to stress re-distribution and multiaxial rupture criteria.
Load Modeling and State Estimation Methods for Power Distribution Systems: Final Report
Tom McDermott
2010-05-07
The project objective was to provide robust state estimation for distribution systems, comparable to what has been available on transmission systems for decades. This project used an algorithm called Branch Current State Estimation (BCSE), which is more effective than classical methods because it decouples the three phases of a distribution system and uses branch current instead of node voltage as a state variable, which is a better match to current measurements.
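The decoupled, per-phase weighted least-squares solve at the heart of a branch-current formulation can be sketched on a toy radial feeder; the network, measurement set, and weights below are invented for illustration and are not drawn from the project report:

```python
import numpy as np

# Toy single-phase branch-current state estimation on a radial feeder
# 0-1-2-3. States x are the three branch currents; measurements are two
# metered branch currents plus two load currents related to the states
# by Kirchhoff's current law. A real BCSE solves one such small problem
# per phase, which is the decoupling advantage noted above.
x_true = np.array([4.5, 2.5, 1.0])      # true branch currents [A]

H = np.array([
    [1.0, 0.0, 0.0],    # ammeter on branch 1
    [0.0, 1.0, 0.0],    # ammeter on branch 2
    [0.0, 1.0, -1.0],   # load at node 2 = I_b2 - I_b3 (KCL)
    [0.0, 0.0, 1.0],    # load at node 3 = I_b3 (KCL)
])
rng = np.random.default_rng(1)
z = H @ x_true + 0.01 * rng.standard_normal(4)   # noisy measurements
W = np.diag([100.0, 100.0, 25.0, 25.0])          # trust meters more

# weighted least-squares estimate: (H^T W H) x = H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```

Because each phase has its own small H, the three phase problems can be solved independently instead of as one coupled three-phase system.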
Bickford, D F; Diemer, Jr, R B
1985-01-01
The redox state of glass from electric melters with complex feed compositions is determined by balance between gases above the melt, and transition metals and organic compounds in the feed. Part I discusses experimental and computational methods of relating flowrates and other melter operating conditions to the redox state of glass, and composition of the melter offgas. Computerized thermodynamic computational methods are useful in predicting the sequence and products of redox reactions and in assessing individual process variations. Melter redox state can be predicted by combining monitoring of melter operating conditions, redox measurement of fused melter feed samples, and periodic redox measurement of product. Mossbauer spectroscopy, and other methods which measure Fe(II)/Fe(III) in glass, can be used to measure melter redox state. Part II develops preliminary operating limits for the vitrification of High-Level Radioactive Waste. Limits on reducing potential to preclude the accumulation of combustible gases, accumulation of sulfides and selenides, and degradation of melter components are the most critical. Problems associated with excessively oxidizing conditions, such as glass foaming and potential ruthenium volatility, are controlled when sufficient formic acid is added to adjust melter feed rheology.
Comparison of two up-scaling methods in poroelasticity and its generalizations
Berryman, J G
2004-03-16
Two methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media are discussed, compared, and contrasted. The two methods are: (1) two-scale and multiscale homogenization, and (2) volume averaging. Both these methods have advantages for some applications and disadvantages for others. For example, homogenization methods can give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis.
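The power-law expansions that homogenization relies on can be written in their standard two-scale form; this generic textbook ansatz is a sketch, not a formula taken from the report:

```latex
u^{\varepsilon}(x) \;=\; u_0\!\left(x, \tfrac{x}{\varepsilon}\right)
  \;+\; \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right)
  \;+\; \varepsilon^{2}\, u_2\!\left(x, \tfrac{x}{\varepsilon}\right) \;+\; \cdots
```

Here \varepsilon is the ratio of micro- to macro-length scales and the correctors u_i(x, y) are periodic in the fast variable y = x/\varepsilon; substituting the ansatz into the microscale equations and collecting like powers of \varepsilon yields cell problems whose solutions determine the effective coefficients of the up-scaled equations, which is precisely the step where homogenization demands the scaling insight mentioned above.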
Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Day-Lewis, Frederick David; Singha, Kamini; Johnson, Timothy C.; Haggerty, Roy; Binley, Andrew; Lane, John W.
2014-11-25
Mass transfer affects contaminant transport and is thought to control the efficiency of aquifer remediation at a number of sites within the Department of Energy (DOE) complex. An improved understanding of mass transfer is critical to meeting the enormous scientific and engineering challenges currently facing DOE. Informed design of site remedies and long-term stewardship of radionuclide-contaminated sites will require new cost-effective laboratory and field techniques to measure the parameters controlling mass transfer spatially and across a range of scales. In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Including the NMR component, our revised study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. 
To achieve our objectives, we implemented a 3-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE’s Hanford 300 Area. In a synergistic add-on to our workplan, we analyzed data from field experiments performed at the DOE Naturita Site under a separate DOE SBR grant, on which PI Day-Lewis served as co-PI. Techniques developed for application to Hanford datasets also were applied to data from Naturita.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Buceta, David; Tojo, Concha; Vukmirovic, Miomir B.; Deepak, F. Leonard; Lopez-Quintela, M. Arturo
2015-06-02
In this study, we present a theoretical model to predict the atomic structure of Au/Pt nanoparticles synthesized in microemulsions. Excellent agreement with the experimental results shows that the structure of the nanoparticles can be controlled at sub-nanometer resolution simply by changing the reactant concentrations. The results of this study not only offer a better understanding of the complex mechanisms governing reactions in microemulsions, but also open up a simple new way to synthesize bimetallic nanoparticles with ad hoc controlled nanostructures.
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; et al
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs in past and future extreme temperature changes.
Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and to improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.
Thermoacoustic wave propagation modeling using a dynamically adaptive wavelet collocation method
Vasilyev, O.V.; Paolucci, S.
1996-12-31
When a localized region of a solid wall surrounding a compressible medium is subjected to a sudden temperature change, the medium in the immediate neighborhood of that region expands. This expansion generates pressure waves. These thermally-generated waves are referred to as thermoacoustic (TAC) waves. The main interest in thermoacoustic waves is motivated by their ability to enhance heat transfer by inducing convective motion away from the heated area. Thermoacoustic wave propagation in a two-dimensional rectangular cavity is studied numerically. The thermoacoustic waves are generated by raising the temperature locally at the walls. The waves, which decay at large time due to thermal and viscous diffusion, propagate and reflect from the walls, creating complicated two-dimensional patterns. The accuracy of the numerical simulation is ensured by using a highly accurate, dynamically adaptive, multilevel wavelet collocation method, which allows local refinements to adapt to local changes in solution scales. Consequently, high-resolution computations are performed only in regions of large gradients. The computational cost of the method is independent of the dimensionality of the problem and is O(N), where N is the total number of collocation points.
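The adaptivity idea, retaining only collocation points whose wavelet (detail) coefficients exceed a threshold, can be illustrated with a toy one-dimensional version; linear interpolation details stand in for the paper's high-order wavelets:

```python
import numpy as np

# Toy illustration of adaptivity in a wavelet collocation method: keep
# only grid points whose detail (interpolation-error) coefficients
# exceed a threshold, so resolution concentrates near sharp gradients.

def significant_points(f, a, b, levels=8, eps=1e-3):
    kept = {a, b}
    for j in range(1, levels + 1):
        xs = np.linspace(a, b, 2**j + 1)
        for i in range(1, 2**j, 2):   # points newly added at level j
            # detail = value minus linear interpolation from level j-1
            detail = f(xs[i]) - 0.5 * (f(xs[i - 1]) + f(xs[i + 1]))
            if abs(detail) > eps:
                kept.add(xs[i])
    return np.array(sorted(kept))

# A steep front at x = 0 attracts points; flat regions stay coarse.
pts = significant_points(lambda x: np.tanh(50 * x), -1.0, 1.0)
```

Only a small fraction of the 257 dyadic points survive the threshold, which is the mechanism behind the O(N) cost quoted above.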
Methods for modeling impact-induced reactivity changes in small reactors.
Tallman, Tyler N.; Radel, Tracy E.; Smith, Jeffrey A.; Villa, Daniel L.; Smith, Brandon M.; Radel, Ross F.; Lipinski, Ronald J.; Wilson, Paul Philip Hood
2010-10-01
This paper describes techniques for determining impact deformation and the subsequent reactivity change for a space reactor impacting the ground following a potential launch accident or for large fuel bundles in a shipping container following an accident. This technique could be used to determine the margin of subcriticality for such potential accidents. Specifically, the approach couples a finite element continuum mechanics model (Pronto3D or Presto) with a neutronics code (MCNP). DAGMC, developed at the University of Wisconsin-Madison, is used to enable MCNP geometric queries to be performed using Pronto3D output. This paper summarizes what has been done historically for reactor launch analysis, describes the impact criticality analysis methodology, and presents preliminary results using representative reactor designs.
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three-dimensional representation of an object in a reference plane from a depth map comprising distances from a reference point to pixels in an image of the object taken from that reference point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
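A much-simplified sketch of the voxel-weighting idea follows; it assumes orthographic top-down rays and a plain +1/-1 vote rather than the patent's perspective rays and actual weighting scheme:

```python
import numpy as np

# Simplified sketch: build a height map from a depth map via voxel
# weights. Orthographic top-down rays and a +1/-1 vote replace the
# patent's perspective rays and its actual weighting scheme.

def height_map_from_depth(depth, z_max=10.0, nz=100):
    """depth[i, j]: measured distance from the top reference plane down
    to the surface along ray (i, j). Returns surface heights."""
    zs = np.linspace(0.0, z_max, nz)
    # weights[i, j, k]: +1 if sample k lies above (in front of) the
    # measured surface along ray (i, j), else -1
    weights = np.where(zs[None, None, :] < depth[:, :, None], 1.0, -1.0)
    # the surface sits where the vote first turns negative in a column
    surface_idx = np.argmax(weights < 0, axis=-1)
    return z_max - zs[surface_idx]   # convert depth below top to height

depth = np.full((4, 4), 3.0)         # flat surface 3 units below the top
heights = height_map_from_depth(depth)
```

Accumulating signed votes rather than taking a single depth reading is what makes the full method robust to noisy or conflicting depth measurements.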
Sabtaji, Agung; Nugraha, Andri Dian
2015-04-24
The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, and this situation leads to high seismic hazard potential. Precise earthquake hypocenter locations are very important, as they provide high-quality earthquake parameter information and constraints on the subsurface structure of this region to society. We derived a 1-D P-wave velocity model using the BMKG earthquake data catalog from April 2009 to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving hypocenter locations using the double-difference method. The relocated hypocenter locations show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to cluster more tightly around the faults.
Advanced modeling to accelerate the scale up of carbon capture technologies
Miller, David C.; Sun, XIN; Storlie, Curtis B.; Bhattacharyya, Debangsu
2015-06-01
In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.
Using Multi-scale Dynamic Rupture Models to Improve Ground Motion...
Office of Scientific and Technical Information (OSTI)
Multi-Scale Modeling Tools to Enable Manufacturing-Informed Design...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Multi-scale thermalhydraulic analyses performed in Nuresim and Nurisp projects
Bestion, D.; Lucas, D.; Anglart, H.; Niceno, B.; Vyskocil, L.
2012-07-01
The successive NURESIM and NURISP projects of the 6th and 7th European Framework Programmes joined the efforts of 21 partners to develop and validate a reference multi-physics and multi-scale platform for reactor simulation. The platform includes system codes, component codes, and also CFD or CMFD simulation tools. Fine-scale CFD simulations are useful for a better understanding of physical processes, for the prediction of small-scale geometrical effects, and for solving problems that require a fine space and/or time resolution. Many important safety issues usually treated at the system scale may now benefit from investigations at the CFD scale. The Pressurized Thermal Shock is investigated using several simulation scales, including Direct Numerical Simulation, Large Eddy Simulation, Very Large Eddy Simulation, and RANS approaches; finally, a coupling of a system code and CFD is applied. Condensation-Induced Water-Hammer was also investigated at both the CFD and 1-D scales. Boiling flow in a reactor core up to Departure from Nucleate Boiling or Dry-Out is investigated at scales much smaller than those of classical subchannel analysis codes. DNS was used to investigate very local processes, whereas CFD in both RANS and LES modes was used to simulate bubbly flow, and Euler-Lagrange simulations were used for annular mist flow investigations. Loss of Coolant Accidents are usually treated by system codes; some related issues are now revisited at the CFD scale. In each case the progress of the analysis is summarized and the benefit of the multi-scale approach is shown.
Advances in coupled safety modeling using systems analysis and high-fidelity methods.
Fanning, T. H.; Thomas, J. W.; Nuclear Engineering Division
2010-05-31
The potential for a sodium-cooled fast reactor to survive severe accident initiators with no damage has been demonstrated through whole-plant testing in EBR-II and FFTF. Analysis of the observed natural protective mechanisms suggests that they would be characteristic of a broad range of sodium-cooled fast reactors utilizing metal fuel. However, in order to demonstrate the degree to which new, advanced sodium-cooled fast reactor designs will possess these desired safety features, accurate, high-fidelity, whole-plant dynamics safety simulations will be required. One of the objectives of the advanced safety-modeling component of the Reactor IPSC is to develop a science-based advanced safety simulation capability by utilizing existing safety simulation tools coupled with emerging high-fidelity modeling capabilities in a multi-resolution approach. As part of this integration, an existing whole-plant systems analysis code has been coupled with a high-fidelity computational fluid dynamics code to assess the impact of high-fidelity simulations on safety-related performance. With the coupled capabilities, it is possible to identify critical safety-related phenomena in advanced reactor designs that cannot be resolved with existing tools. In this report, the impact of coupling is demonstrated by evaluating the conditions of outlet plenum thermal stratification during a protected loss-of-flow transient. Outlet plenum stratification was anticipated to alter core temperatures and flows predicted during natural circulation conditions. This effect was observed during the simulations. What was not anticipated, however, is the far-reaching impact that resolving thermal stratification has on the whole plant. The high temperatures predicted at the IHX inlet due to thermal stratification in the outlet plenum force heat into the intermediate system to the point that it eventually becomes a source of heat for the primary system.
The results also suggest that flow stagnation in the intermediate system is possible, raising questions about the effectiveness of the intermediate decay heat removal systems in the design that was evaluated. Existing tools do not predict flow stagnation. This work has demonstrated that with a proper coupling approach, a high-fidelity CFD tool can be used to resolve the important flow and temperature distributions throughout a plant while still maintaining the whole-plant safety analysis capabilities of a systems analysis code.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
in warm dense matter experiments with diffuse interface methods in the ALE-AMR code
Liu, Wangyi; Barnard, John; Friedman, Alex; Masters, Nathan; Fisher, Aaron; Mlaker, Velemir; Koniges, Alice; Eder, David
2011-08-04
In this paper we describe an implementation of a single-fluid interface model in the ALE-AMR code to simulate surface tension effects. The model does not require explicit information on the physical state of the two phases. The only change to the existing fluid
McNunn, Gabriel S; Bryden, Kenneth M
2013-01-01
Tarjan's algorithm schedules the solution of systems of equations by detecting the coupling among the equations and grouping them accordingly. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Rapid assembly and substitution of different models permits the quick turnaround needed to support the what-if questions that arise in engineering design.
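The extension described above can be illustrated with a plain strongly-connected-components pass: models that depend on each other cyclically form one group that must be solved simultaneously, and the groups emerge in an order suitable for sequential solution. The model names and dependency graph below are invented for illustration; this is a sketch of the idea, not the paper's implementation.

```python
# Sketch of Tarjan's algorithm applied to model scheduling. The dependency
# graph (which model consumes which model's output) is hypothetical.
def tarjan_scc(graph):
    """Return strongly connected components in reverse topological order."""
    counter = [0]
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    sccs = []

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v roots a component
            scc = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Hypothetical federated models: blade heat transfer and thermal stress are
# mutually coupled (one group); the coating model consumes both.
deps = {
    "heat_transfer": ["thermal_stress"],
    "thermal_stress": ["heat_transfer"],
    "coating": ["heat_transfer", "thermal_stress"],
}
groups = tarjan_scc(deps)
# The coupled pair emerges as one group that must be iterated together,
# before the group containing "coating" is solved.
```

Because Tarjan emits components only after everything they reach has been emitted, reversing the output (or reading it as-is for dependency edges pointing at prerequisites) gives a valid solution schedule across model groups.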
Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis
2015-06-30
Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. 
Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
Modeling Deep Burn TRISO Particle Nuclear Fuel
Besmann, Theodore M [ORNL; Stoller, Roger E [ORNL; Samolyuk, German D [ORNL; Schuck, Paul C [ORNL; Rudin, Sven [Los Alamos National Laboratory (LANL); Wills, John [Los Alamos National Laboratory (LANL); Wirth, Brian D. [University of California, Berkeley; Kim, Sungtae [University of Wisconsin, Madison; Morgan, Dane [University of Wisconsin, Madison; Szlufarska, Izabela [University of Wisconsin, Madison]
2012-01-01
Under the DOE Deep Burn program TRISO fuel is being investigated as a fuel form for consuming plutonium and minor actinides, and for greater efficiency in uranium utilization. The result will thus be to drive TRISO particulate fuel to very high burn-ups. In the current effort the various phenomena in the TRISO particle are being modeled using a variety of techniques. The chemical behavior is being treated utilizing thermochemical analysis to identify phase formation/transformation and chemical activities in the particle, including kernel migration. First principles calculations are being used to investigate the critical issue of fission product palladium attack on the SiC coating layer. Density functional theory is being used to understand fission product diffusion within the plutonia kernel. Kinetic Monte Carlo techniques are shedding light on transport of fission products, most notably silver, through the carbon and SiC coating layers. The diffusion of fission products through an alternative coating layer, ZrC, is being assessed via DFT methods. Finally, a multiscale approach is being used to understand thermal transport, including the effect of radiation damage induced defects, in a model SiC material.
Bogenschutz, Peter; Moeng, Chin-Hoh
2015-10-13
The PI’s at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified-Higher Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) to the Multi-scale Modeling Framework (MMF) global model and testing of SHOC on deep convective cloud regimes.
Theory and Modeling Capabilities | Argonne National Laboratory
Theory and multiscale computer simulations provide the interpretive and predictive framework to understand fundamental processes and to aid in the design of functional nanoscale systems. Our primary facility is a high-performance computing cluster accommodating parallel computer-intensive applications. Capabilities: Carbon High-Performance Computing Cluster (3000 cores, 30 GPUs, ~30 TeraFLOPS); development tools (GNU and Intel compilers and math libraries); Density
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews distribution transformer load models developed in the IEC 60076 standard and applies the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
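The Monte Carlo sweep described above can be sketched as follows. The Arrhenius loss-of-life expression mirrors the form used in IEC 60076-7 / IEEE C57.91 style insulation-aging calculations, but the thermal model, the load profile, and every parameter value here are invented for illustration.

```python
import math
import random

# Illustrative Monte Carlo sweep of transformer aging under plug-in vehicle
# charging. Arrhenius form is standard; all numbers below are made up.
B = 15000.0          # Arrhenius constant, kelvin
THETA_REF = 110.0    # reference hot-spot temperature, deg C

def aging_factor(theta_hs):
    """Relative insulation aging rate at hot-spot temperature theta_hs (deg C)."""
    return math.exp(B / (THETA_REF + 273.0) - B / (theta_hs + 273.0))

def simulate_day(n_vehicles, charge_kw, rating_kva, rng):
    """Mean aging factor over one day for a single random charging scenario."""
    total = 0.0
    for hour in range(24):
        base_kw = 40.0 + 10.0 * math.sin(math.pi * (hour - 6) / 12.0)
        charging = 18 <= hour < 22                   # assumed 4 h night window
        ev_kw = n_vehicles * charge_kw * rng.random() if charging else 0.0
        load_pu = (base_kw + ev_kw) / rating_kva     # per-unit loading
        theta_hs = 25.0 + 60.0 * load_pu ** 2        # crude steady hot spot
        total += aging_factor(theta_hs)
    return total / 24.0

rng = random.Random(42)
scenarios = [simulate_day(4, 3.3, 50.0, rng) for _ in range(100)]
mean_aging = sum(scenarios) / len(scenarios)
# mean_aging compares the simulated day to continuous operation at THETA_REF;
# values above 1 indicate accelerated loss of life.
```

Sweeping `n_vehicles`, `rating_kva`, and `charge_kw` over their ranges, as the abstract describes, then maps expected aging against adoption scenarios.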
diffuse interface methods in ALE-AMR code with application in modeling NDCX-II experiments
Liu, Wangyi (LBNL); Barnard, John (LLNL); Friedman, Alex (LLNL); Masters, Nathan (LLNL); Fisher, Aaron (LLNL); Koniges, Alice (LLNL); Eder, David (LLNL)
This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was performed
Physics-based multiscale coupling for full core nuclear reactor simulation
Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; Slaughter, Andrew E.; Andrš, David; Wang, Yaqi; Short, Michael P.; Perez, Danielle M.; Tonks, Michael R.; Ortensi, Javier; Zou, Ling; Martineau, Richard C.
2015-10-01
Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows for a variety of different data exchanges to occur simultaneously on high performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling—in a coupled, multiscale manner—crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle. © 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license.
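The data-exchange pattern such a framework enables can be reduced to a toy: two single-physics "applications" trade scalar fields and Picard-iterate until they agree. The feedback model below is hypothetical and does not use the actual MOOSE API; it shows only the fixed-point coupling idea.

```python
# Toy fixed-point (Picard) coupling between two "applications".
def solve_power(temperature):
    """Toy 'neutronics' app: power drops as fuel temperature rises (feedback)."""
    return 100.0 / (1.0 + 0.002 * (temperature - 300.0))

def solve_temperature(power):
    """Toy 'thermal' app: steady temperature rises with deposited power."""
    return 300.0 + 2.0 * power

temperature, power = 300.0, 0.0
for iteration in range(100):
    new_power = solve_power(temperature)            # exchange: T -> power
    new_temperature = solve_temperature(new_power)  # exchange: power -> T
    converged = abs(new_temperature - temperature) < 1e-10
    temperature, power = new_temperature, new_power
    if converged:
        break
# At convergence both applications agree on a single (power, temperature) pair.
```

Because the toy feedback is contractive, the iteration converges in a few tens of exchanges; real multiphysics couplings may need relaxation or Newton-based acceleration.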
Augmenting epidemiological models with point-of-care diagnostics data
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; Ozmen, Ozgur
2016-04-20
Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
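A minimal version of the calibration loop reads as follows: a forward SIR model, a squared-error loss against observed counts, and a simulated-annealing search over the transmission rate. The "observed" series below is synthetic; the study calibrated against zip-code level POC records, and all parameter values here are illustrative.

```python
import math
import random

# Sketch: calibrate the SIR transmission rate beta by simulated annealing.
def sir_infected(beta, gamma=0.1, s0=0.99, i0=0.01, steps=100):
    """Forward-Euler SIR; returns the infected fraction at each time step."""
    s, i = s0, i0
    series = []
    for _ in range(steps):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        series.append(i)
    return series

def loss(beta, observed):
    return sum((m - o) ** 2 for m, o in zip(sir_infected(beta), observed))

observed = sir_infected(beta=0.35)        # synthetic stand-in for POC counts
rng = random.Random(1)
beta = 0.6                                # deliberately poor initial guess
cur_loss = loss(beta, observed)
best_beta, best_loss = beta, cur_loss
temp = 1.0
for _ in range(2000):
    cand = min(1.0, max(0.01, beta + rng.gauss(0.0, 0.05)))
    cand_loss = loss(cand, observed)
    # Metropolis acceptance: always take improvements, occasionally take worse
    if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / temp):
        beta, cur_loss = cand, cand_loss
        if cur_loss < best_loss:
            best_beta, best_loss = beta, cur_loss
    temp *= 0.995                         # geometric cooling schedule
# best_beta should land close to the true transmission rate of 0.35
```

The annealing wrapper matters because real POC data are noisy and the loss surface over several parameters is rarely convex; for this one-parameter toy a grid search would do equally well.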
Lundquist, K A; Chow, F K; Lundquist, J K; Mirocha, J D
2007-09-04
Flow and dispersion processes in urban areas are profoundly influenced by the presence of buildings which divert mean flow, affect surface heating and cooling, and alter the structure of turbulence in the lower atmosphere. Accurate prediction of velocity, temperature, and turbulent kinetic energy fields is necessary for determining the transport and dispersion of scalars. Correct predictions of scalar concentrations are vital in densely populated urban areas where they are used to aid in emergency response planning for accidental or intentional releases of hazardous substances. Traditionally, urban flow simulations have been performed by computational fluid dynamics (CFD) codes which can accommodate the geometric complexity inherent to urban landscapes. In these types of models the grid is aligned with the solid boundaries, and the boundary conditions are applied to the computational nodes coincident with the surface. If the CFD code uses a structured curvilinear mesh, then time-consuming manual manipulation is needed to ensure that the mesh conforms to the solid boundaries while minimizing skewness. If the CFD code uses an unstructured grid, then the solver cannot be optimized for the underlying data structure which takes an irregular form. Unstructured solvers are therefore often slower and more memory intensive than their structured counterparts. Additionally, urban-scale CFD models are often forced at lateral boundaries with idealized flow, neglecting dynamic forcing due to synoptic scale weather patterns. These CFD codes solve the incompressible Navier-Stokes equations and include limited options for representing atmospheric processes such as surface fluxes and moisture. Traditional CFD codes therefore possess several drawbacks, due to the expense of either creating the grid or solving the resulting algebraic system of equations, and due to the idealized boundary conditions and the lack of full atmospheric physics. 
Meso-scale atmospheric boundary layer simulations, on the other hand, are performed by numerical weather prediction (NWP) codes, which cannot handle the geometry of the urban landscape, but do provide a more complete representation of atmospheric physics. NWP codes typically use structured grids with terrain-following vertical coordinates, include a full suite of atmospheric physics parameterizations, and allow for dynamic synoptic scale lateral forcing through grid nesting. Terrain following grids are unsuitable for urban terrain, as steep terrain gradients cause extreme distortion of the computational cells. In this work, we introduce and develop an immersed boundary method (IBM) to allow the favorable properties of a numerical weather prediction code to be combined with the ability to handle complex terrain. IBM uses a non-conforming structured grid, and allows solid boundaries to pass through the computational cells. As the terrain passes through the mesh in an arbitrary manner, the main goal of the IBM is to apply the boundary condition on the interior of the domain as accurately as possible. With the implementation of the IBM, numerical weather prediction codes can be used to explicitly resolve urban terrain. Heterogeneous urban domains using the IBM can be nested into larger mesoscale domains using a terrain-following coordinate. The larger mesoscale domain provides lateral boundary conditions to the urban domain with the correct forcing, allowing seamless integration between mesoscale and urban scale models. Further discussion of the scope of this project is given by Lundquist et al. [2007]. The current paper describes the implementation of an IBM into the Weather Research and Forecasting (WRF) model, which is an open source numerical weather prediction code. The WRF model solves the non-hydrostatic compressible Navier-Stokes equations, and employs an isobaric terrain-following vertical coordinate. 
Many types of IB methods have been developed by researchers; a comprehensive review can be found in Mittal and Iaccarino [2005]. To the authors' knowledge, this is the first IBM approach that is able to
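The core ghost-cell idea behind such an immersed boundary method can be shown in one dimension: the wall falls between grid nodes, and the node just inside the solid is assigned a value such that linear interpolation across the wall reproduces the Dirichlet boundary value. The grid spacing, wall position, and temperatures below are arbitrary illustrative choices, not values from the WRF implementation.

```python
import numpy as np

# One-dimensional ghost-cell sketch of an immersed boundary method (IBM).
n, dx = 11, 0.1
x = np.linspace(0.0, 1.0, n)
x_wall, t_wall = 0.27, 300.0      # immersed wall position and its temperature

t = np.full(n, 350.0)             # fluid temperature field (uniform here)
ig = int(np.floor(x_wall / dx))   # index of the ghost node (inside the solid)
ifl = ig + 1                      # index of the first fluid node

# Extrapolate through (x_wall, t_wall) and the first fluid node so that the
# discrete profile passes exactly through the prescribed wall value.
slope = (t[ifl] - t_wall) / (x[ifl] - x_wall)
t[ig] = t_wall - slope * (x_wall - x[ig])

# Interpolating between ghost and fluid nodes now recovers t_wall at the wall:
t_at_wall = t[ig] + (t[ifl] - t[ig]) * (x_wall - x[ig]) / (x[ifl] - x[ig])
```

In two or three dimensions the same principle holds, but the ghost value comes from a multidimensional interpolation stencil built around the point where the surface normal crosses the boundary.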
Balcomb, J.D.
1981-01-01
Correlation methods have been developed to provide a quick and relatively simple technique for estimating the performance of passive solar systems. The correlations are done with respect to data generated from simulation models. The techniques and accuracies are described. Both the Solar Load Ratio and Un-Utilizability methods are described. The advantages and limitations of correlation methods as design tools are discussed.
V. Chipman
2002-10-31
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of the heat produced by radionuclide decay that is carried away by the ventilation air. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their postclosure analyses.
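The wall heat fraction defined above is a simple energy ratio; a back-of-envelope version with invented numbers:

```python
# Fraction of decay heat swept out by ventilation air, and its complement,
# the wall heat fraction conducted into the rock. All numbers are invented.
cp_air = 1005.0            # J/(kg K), specific heat of air
m_dot = 2.0                # kg/s, ventilation mass flow along the drift
t_in, t_out = 25.0, 40.0   # deg C, air temperature at drift inlet and outlet
q_decay = 50_000.0         # W, decay heat generated over the ventilated segment

q_removed = m_dot * cp_air * (t_out - t_in)   # heat carried away by the air
removal_fraction = q_removed / q_decay        # ventilation heat removal
wall_heat_fraction = 1.0 - removal_fraction   # conducted into surrounding rock
# 2.0 * 1005 * 15 = 30150 W removed -> roughly 60% removal, 40% to the wall
```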
Wind Generator Modeling: This modular block diagram represents the major components of the generic dynamic wind turbine generator models. Model blocks and parameters are used to represent the different wind
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
Dynamics of a spherical particle in an acoustic field: A multiscale approach
Xie, Jin-Han, E-mail: J.H.Xie@ed.ac.uk; Vanneste, Jacques [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh EH9 3JZ (United Kingdom)
2014-10-15
A rigid spherical particle in an acoustic wave field oscillates at the wave period but also has a mean motion on a longer time scale. The dynamics of this mean motion is crucial for numerous applications of acoustic microfluidics, including particle manipulation and flow visualisation. It is controlled by four physical effects: acoustic (radiation) pressure, streaming, inertia, and viscous drag. In this paper, we carry out a systematic multiscale analysis of the problem in order to assess the relative importance of these effects depending on the parameters of the system that include wave amplitude, wavelength, sound speed, sphere radius, and viscosity. We identify two distinguished regimes characterised by a balance among three of the four effects, and we derive the equations that govern the mean particle motion in each regime. This recovers and organises classical results by King [On the acoustic radiation pressure on spheres, Proc. R. Soc. A 147, 212–240 (1934)], Gor'kov [On the forces acting on a small particle in an acoustical field in an ideal fluid, Sov. Phys. Dokl. 6, 773–775 (1962)], and Doinikov [Acoustic radiation pressure on a rigid sphere in a viscous fluid, Proc. R. Soc. London A 447, 447–466 (1994)], clarifies the range of validity of these results, and reveals a new nonlinear dynamical regime. In this regime, the mean motion of the particle remains intimately coupled to that of the surrounding fluid, and while viscosity affects the fluid motion, it plays no part in the acoustic pressure. Simplified equations, valid when only two physical effects control the particle motion, are also derived. They are used to obtain sufficient conditions for the particle to behave as a passive tracer of the Lagrangian-mean fluid motion.
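The separation of scales exploited above can be reproduced numerically for a passive tracer: a particle advected by a small-amplitude travelling wave oscillates at the wave period yet accumulates a slow Stokes-type drift, which a two-timing (averaging) analysis predicts to be U²k/(2ω) at leading order. All parameter values are arbitrary illustrative choices.

```python
import math

# Two-timescale demonstration: fast oscillation plus slow mean drift.
U, k, omega = 0.1, 1.0, 1.0          # wave amplitude, wavenumber, frequency
dt, n_steps = 0.01, 20_000           # integrate over ~32 wave periods

def velocity(x, t):
    """Travelling-wave velocity field advecting the tracer."""
    return U * math.cos(k * x - omega * t)

x, t = 0.0, 0.0
for _ in range(n_steps):             # classical 4th-order Runge-Kutta
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

mean_drift = x / t                    # slow Lagrangian-mean velocity
predicted = U**2 * k / (2.0 * omega)  # leading-order averaging prediction
```

The numerically measured drift matches the averaged prediction to within the residual wave-phase oscillation, which is the essence of the multiscale balance arguments in the paper.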
Comparison of up-scaling methods in poroelasticity and its generalizations
Berryman, J G
2003-12-13
Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.
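A concrete example of the closed-form coefficients that effective-medium arguments deliver is the layered-medium (Wiener) bounds: the arithmetic mean of layer permeabilities governs flow along the layers and the harmonic mean governs flow across them. The layer values below are arbitrary.

```python
# Up-scaled permeability of an equal-thickness layered medium: arithmetic
# mean along the layers, harmonic mean across them (classical Wiener bounds).
perms = [100.0, 10.0, 1.0]     # layer permeabilities, equal thickness

k_parallel = sum(perms) / len(perms)                 # along-layer (arithmetic)
k_series = len(perms) / sum(1.0 / k for k in perms)  # across-layer (harmonic)

# Any isotropic effective permeability of this mixture lies between the two:
# k_series <= k_eff <= k_parallel, so the means bracket all up-scaled values.
```

The strong contrast between the two means (here more than a factor of ten) is why up-scaled coefficients must respect the geometry of the microstructure, not just the volume fractions.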
Gehin, J.C.; Worley, B.A.; Renier, J.P.; Wemple, C.A.; Jahshan, S.N.; Ryskamp, J.M.
1995-08-01
This report summarizes the neutronics analysis performed during 1991 and 1992 in support of characterization of the conceptual design of the Advanced Neutron Source (ANS). The methods used in the analysis, parametric studies, and key results supporting the design and safety evaluations of the conceptual design are presented. The analysis approach used during the conceptual design phase followed the same approach used in early ANS evaluations: (1) a strong reliance on Monte Carlo theory for beginning-of-cycle reactor performance calculations and (2) a reliance on few-group diffusion theory for reactor fuel cycle analysis and for evaluation of reactor performance at specific time steps over the fuel cycle. The Monte Carlo analysis was carried out using the MCNP continuous-energy code, and the few-group diffusion theory calculations were performed using the VENTURE and PDQ code systems. The MCNP code was used primarily for its capability to model the reflector components in realistic geometries as well as the inherent circumvention of cross-section processing requirements and use of energy-collapsed cross sections. The MCNP code was used for evaluations of reflector component reactivity effects and of heat loads in these components. The code was also used as a benchmark comparison against the diffusion-theory estimates of key reactor parameters such as region fluxes, control rod worths, reactivity coefficients, and material worths. The VENTURE and PDQ codes were used to provide independent evaluations of burnup effects, power distributions, and small perturbation worths. The performance and safety calculations performed over the subject time period are summarized, and key results are provided. The key results include flux and power distributions over the fuel cycle, silicon production rates, fuel burnup rates, component reactivities, control rod worths, component heat loads, shutdown reactivity margins, reactivity coefficients, and isotope production rates.
Bernard, S.; Horsfield, B.; Schultz, H.; Schreiber, A.; Wirth, R.; Thi AnhVu, T.; Perssen, F.; Konitzer, S.; Volk, H.; et al.
2010-01-01
Organic geochemical analyses, including solvent extraction or pyrolysis, followed by gas chromatography and mass spectrometry, are generally conducted on bulk gas shale samples to evaluate their source and reservoir properties. While organic petrology has been directed at unravelling the matrix composition and textures of these economically important unconventional resources, their spatial variability in chemistry and structure is still poorly documented at the sub-micrometre scale. Here, a combination of techniques including transmission electron microscopy and a synchrotron-based microscopy tool, scanning transmission X-ray microscopy, have been used to characterize at a multiple length scale an overmature organic-rich calcareous mudstone from northern Germany. We document multi-scale chemical and mineralogical heterogeneities within the sample, from the millimetre down to the nanometre-scale. From the detection of different types of bitumen and authigenic minerals associated with the organic matter, we show that the multi-scale approach used in this study may provide new insights into gaseous hydrocarbon generation/retention processes occurring within gas shales and may shed new light on their thermal history.
Shen, Chen
2014-01-20
The goal of this project is to model creep-fatigue-environment interactions in steam turbine rotor materials for advanced ultra-supercritical (A-USC) coal power plants, to develop and demonstrate computational algorithms for alloy property predictions, and to determine and model key mechanisms that contribute to the damages caused by creep-fatigue-environment interactions. The nickel-based Alloy 282 is selected for this project because it is one of the leading candidate materials for the high temperature/pressure section of an A-USC steam turbine. The methods developed in the project are expected to be applicable to other metal alloys in similar steam/oxidation environments. The major developments are: failure mechanism and microstructural characterization; atomistic and first-principles modeling of crack tip oxygen embrittlement; modeling of gamma prime microstructures and mesoscale microstructure-defect interactions; microstructure- and damage-based creep prediction; and multi-scale crack growth modeling considering oxidation, viscoplasticity, and fatigue. The technology developed in this project is expected to enable more accurate prediction of long service life of advanced alloys for A-USC power plants, and to provide faster and more effective materials design, development, and implementation than current state-of-the-art computational and experimental methods. This document is a final technical report for the project, covering efforts conducted from January 2011 to January 2014.
Shi, Xing; Lin, Guang
2014-11-01
To model the sedimentation of the red blood cell (RBC) in a square duct and a circular pipe, the recently developed technique derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method (LBM-DLM/FD) is extended to employ the mesoscopic network model for simulations of the sedimentation of the RBC in flow. The flow is simulated by the lattice Boltzmann method with a strong magnetic body force, while the network model is used for modeling RBC deformation. The fluid-RBC interactions are enforced by the Lagrange multiplier. The sedimentation of the RBC in a square duct and a circular pipe is simulated, revealing the capacity of the current method for modeling the sedimentation of RBC in various flows. Numerical results illustrate that the terminal settling velocity increases with the increment of the exerted body force. The deformation of the RBC has a significant effect on the terminal settling velocity due to the change of the frontal area. The larger the exerted force, the smaller the frontal area and the larger the deformation of the RBC.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
Chacon, Luis
2015-07-16
The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D electrostatic (ES) implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-dimensional electromagnetic (EM) PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt ≫ 1 and Δx ≫ λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of nonlinear function evaluations, N_FE, leading to an optimal algorithm.
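The self-heating point can be illustrated on a harmonic oscillator, a common proxy for particle motion in a plasma wave: explicit Euler inflates the discrete energy every step, while the implicit midpoint (Crank-Nicolson) rule conserves it exactly. This is a toy analogue of the energy-conservation property, not the paper's PIC scheme.

```python
# Explicit Euler "self-heats"; implicit midpoint conserves discrete energy.
dt, n = 0.1, 1000

# Explicit Euler for x' = v, v' = -x: the update matrix has det = 1 + dt^2,
# so the energy grows by that factor every step (numerical self-heating).
x, v = 1.0, 0.0
for _ in range(n):
    x, v = x + dt * v, v - dt * x
energy_explicit = 0.5 * (x * x + v * v)

# Implicit midpoint (Crank-Nicolson): solve the 2x2 linear update exactly.
# (x_new - x)/dt = (v + v_new)/2 ; (v_new - v)/dt = -(x + x_new)/2
x, v = 1.0, 0.0
a = 0.5 * dt
denom = 1.0 + a * a
for _ in range(n):
    x_new = ((1.0 - a * a) * x + dt * v) / denom
    v_new = ((1.0 - a * a) * v - dt * x) / denom
    x, v = x_new, v_new
energy_implicit = 0.5 * (x * x + v * v)
# energy_implicit stays at 0.5; energy_explicit grows like (1 + dt^2)^n
```

The implicit update is a Cayley transform of the rotation generator, hence exactly orthogonal; this is the one-particle analogue of the discrete energy theorem the implicit PIC formulation enforces globally.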
Three-Dimensional Modeling of the Reactive Transport of CO2 and Its Impact on Geomechanical Properties of Reservoir Rocks and Seals
Office of Scientific and Technical Information (OSTI)
This article develops a novel multiscale modeling approach to analyze CO2 reservoirs using...
Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David
2013-09-01
The overarching goal of this work is to advance the capabilities of technology evaluators in assessing the building-level baseline modeling capabilities of Energy Management and Information System (EMIS) software. Through their customer engagement platforms and products, EMIS software products have the potential to produce whole-building energy savings through multiple strategies: building system operation improvements, equipment efficiency upgrades and replacements, and inducement of behavioral change among the occupants and operations personnel. Some offerings may also automate the quantification of whole-building energy savings, relative to a baseline period, using empirical models that relate energy consumption to key influencing parameters, such as ambient weather conditions and building operation schedule. These automated baseline models can be used to streamline the whole-building measurement and verification (M&V) process, and are therefore of critical importance in the context of multi-measure, whole-building-focused utility efficiency programs. This report documents the findings of a study conducted to begin answering critical questions regarding quantification of savings at the whole-building level and the use of automated and commercial software tools. To evaluate the modeling capabilities of EMIS software for the use case of whole-building savings estimation, four research questions were addressed: 1. What is a general methodology that can be used to evaluate baseline model performance, both in terms of a) overall robustness, and b) relative to other models? 2. How can that general methodology be applied to evaluate proprietary models that are embedded in commercial EMIS tools, and how might one handle practical issues associated with data security, intellectual property, appropriate testing ‘blinds’, and large data sets? 3. How can buildings be pre-screened to identify those that are the most model-predictable, and therefore those whose savings can be calculated with the least error? 4. What is the state of public-domain models; that is, how well do they perform, and what are the associated implications for whole-building measurement and verification (M&V)? Additional project objectives addressed as part of this study include: (1) clarification of the use cases and conditions for baseline modeling performance metrics, benchmarks, and evaluation criteria; (2) providing guidance for determining customer suitability for baseline modeling; (3) describing the portfolio-level effects of baseline model estimation errors; (4) informing PG&E’s development of EMIS technology product specifications; and (5) providing the analytical foundation for future studies about baseline modeling and the savings effects of EMIS technologies. A final objective of this project was to demonstrate the application of the methodology, performance metrics, and test protocols with participating EMIS product vendors.
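Baseline model performance of the kind discussed above is commonly summarized with normalized metrics such as CV(RMSE) and NMBE. The sketch below scores a simple temperature-regression baseline on held-out data; the model form, the synthetic data, and the function names are illustrative assumptions, not the report's methodology.

```python
# Hedged sketch, not the report's methodology: fit a simple linear
# temperature baseline on a training period, then score predictions on a
# held-out period with CV(RMSE) and NMBE, two common M&V metrics. The model
# form and synthetic data are assumptions for illustration.

def fit_linear(temps, loads):
    """Ordinary least squares for load = a + b * temperature."""
    n = len(temps)
    mt, ml = sum(temps) / n, sum(loads) / n
    b = (sum((t - mt) * (l - ml) for t, l in zip(temps, loads))
         / sum((t - mt) ** 2 for t in temps))
    return ml - b * mt, b

def score(actual, predicted):
    """Return CV(RMSE) and NMBE, both normalized by the mean actual load."""
    n = len(actual)
    mean = sum(actual) / n
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    bias = sum(a - p for a, p in zip(actual, predicted)) / (n * mean)
    return rmse / mean, bias

train_t, train_l = [10, 15, 20, 25, 30], [100, 120, 141, 160, 181]  # synthetic daily data
a, b = fit_linear(train_t, train_l)
test_t, test_l = [12, 28], [108, 173]
pred = [a + b * t for t in test_t]
cvrmse, nmbe = score(test_l, pred)
print(f"CV(RMSE) = {cvrmse:.3f}, NMBE = {nmbe:+.3f}")
```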
M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my
2014-06-19
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives; analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R{sup 2}) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. In addition, the CS-smoothed drying rate proved to be an effective estimator for the moisture-time curves, as well as for missing moisture content data, of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
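As an illustration of the fitting procedure described above (not the paper's code), the sketch below fits one classical thin-layer drying model, the Lewis/Newton model MR(t) = exp(-kt), by linearizing ln MR = -kt, and scores it with R^2 and RMSE; the moisture-ratio data are synthetic.

```python
import math

# Hedged sketch, not the paper's code: fit the Lewis/Newton thin-layer
# drying model MR(t) = exp(-k t) by linearizing ln(MR) = -k t, then score
# the fit with R^2 and RMSE as the study does for its six candidate models.
# The moisture-ratio data below are synthetic.

def fit_lewis(times, mr):
    """Least-squares slope of ln(MR) vs t through the origin gives k (1/h)."""
    return -sum(t * math.log(m) for t, m in zip(times, mr)) / sum(t * t for t in times)

def goodness(observed, predicted):
    """Coefficient of determination R^2 and root mean square error."""
    n = len(observed)
    mean = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

times = [1, 2, 4, 8, 16, 24]                   # drying time, hours
mr = [0.82, 0.67, 0.45, 0.20, 0.041, 0.0082]   # moisture ratio (synthetic)
k = fit_lewis(times, mr)
pred = [math.exp(-k * t) for t in times]
r2, rmse = goodness(mr, pred)
print(f"k = {k:.3f} /h, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")
```

The two-term model used in the study simply adds a second exponential, a*exp(-k1*t) + b*exp(-k2*t), which requires nonlinear least squares rather than this linearization.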
Sandia Labs Releases New Version of PVLib Toolbox (Modeling, News, Photovoltaic, Solar). Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power systems. The version 1.3 release includes the following added functions: functions to estimate parameters for popular PV module models, including PVsyst and the CEC '5 parameter' model, and a new model of the effects of solar...
An interface tracking model for droplet electrocoalescence.
Erickson, Lindsay Crowl
2013-09-01
This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g., microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electrocoalescence can provide a new perspective on a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing, and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electrocoalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.
Engine Combustion/Modeling
Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust aftertreatment systems. The engine combustion modeling is focused on developing Large Eddy Simulation (LES). LES is being used with closely coupled key target experiments to reveal new understanding of the fundamental processes involved in engine combustion...
Reacting Flow/Modeling
Turbulence models typically involve coarse-graining and/or time averaging. Though adequate for modeling mean transport, this approach does not address turbulence-microphysics interactions that are important in combustion processes. Subgrid models are developed to represent these interactions. The CRF has developed a fundamentally different representation of these interactions that does not involve distinct coarse-grained and subgrid...
Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.
1990-01-01
The characteristics of a biological fluid sample containing an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of the absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so that there is differential absorption of the infrared energy as a function of the wavelength of the incident energy. The differential absorption causes intensity variations of the infrared energy as a function of wavelength, and the concentration of the unknown analyte is determined from the thus-derived intensity variations using the model's absorption-versus-wavelength function.
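The calibration idea described above, building a model from known samples and applying it to an unknown, can be sketched as a small linear least-squares problem. The two-wavelength model, the absorbance values, and the helper names below are illustrative assumptions, not the patented method; a real instrument would use many wavelengths and a multivariate method such as PLS.

```python
# Hedged sketch of the calibration idea, not the patented method: build a
# linear model conc = b1*A1 + b2*A2 from known samples' absorbances at two
# wavelengths, then apply it to an unknown sample. All values are synthetic;
# a real instrument uses many wavelengths and a multivariate method.

def fit_two_wavelengths(a1, a2, conc):
    """Solve the 2x2 normal equations for conc = b1*A1 + b2*A2."""
    s11 = sum(x * x for x in a1)
    s22 = sum(y * y for y in a2)
    s12 = sum(x * y for x, y in zip(a1, a2))
    r1 = sum(x * c for x, c in zip(a1, conc))
    r2 = sum(y * c for y, c in zip(a2, conc))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

# Known samples: absorbance at two wavelengths and measured concentration.
a1   = [0.10, 0.20, 0.30, 0.40]
a2   = [0.05, 0.10, 0.20, 0.10]
conc = [1.00, 2.00, 3.50, 3.00]       # generated from conc = 5*A1 + 10*A2
b1, b2 = fit_two_wavelengths(a1, a2, conc)
unknown = b1 * 0.25 + b2 * 0.12       # absorbances of the unknown sample
print(f"b1 = {b1:.2f}, b2 = {b2:.2f}, estimated concentration = {unknown:.2f}")
```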
Wang, Ruofan; Wang, Jiang; Deng, Bin; Liu, Chen; Wei, Xile; Tsang, K. M.; Chan, W. L.
2014-03-15
A combined method composed of the unscented Kalman filter (UKF) and the synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used for studying Parkinson's disease because of its relay role connecting the basal ganglia and the cortex. In this work, we consider the condition in which only a time series of the action potential with heavy noise is available. Numerical results demonstrate not only that this method can successfully estimate model parameters from the extracted action-potential time series, but also that it performs much better than the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the important role of the TC neuron in normal and pathological brain function, this approach to estimating the critical parameters could have important implications for the study of its nonlinear dynamics and, further, for the treatment of Parkinson's disease.
Marzouk, Youssef; Fast, P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M.; Ray, J. P.
2006-01-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
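A toy version of the formulation (not the authors' model) makes the idea concrete: with an assumed log-normal incubation-period distribution and a flat prior, a grid posterior over the release time can be computed from a handful of symptom-onset times. All parameter values and data below are invented.

```python
import math

# Hedged toy version, not the authors' model: with an assumed log-normal
# incubation-period distribution (mu = 1.5, sigma = 0.5 in log-days) and a
# flat prior, grid-search the posterior mode of the release time t0 from a
# short series of symptom-onset times. The onset data are invented.

def log_lognormal_pdf(dt, mu=1.5, sigma=0.5):
    """Log-density of a log-normal incubation period of length dt days."""
    if dt <= 0:
        return float("-inf")
    return (-math.log(dt * sigma * math.sqrt(2 * math.pi))
            - (math.log(dt) - mu) ** 2 / (2 * sigma ** 2))

def release_time_map(onsets, lo=-3.0, hi=2.0, step=0.05):
    """Maximum a posteriori release time on a 1D grid (flat prior)."""
    best_t0, best_ll = lo, float("-inf")
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        t0 = lo + i * step
        ll = sum(log_lognormal_pdf(t - t0) for t in onsets)
        if ll > best_ll:
            best_t0, best_ll = t0, ll
    return best_t0

onsets = [2.5, 3.1, 4.0, 4.9, 5.6]   # days after the (unknown) release
t0 = release_time_map(onsets)
print(f"MAP release time: {t0:+.2f} days")
```

The full problem also infers the number infected and the dose, which would add dimensions to the grid (or call for MCMC) but not change the structure.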
Three-dimensional Dendritic Needle Network model with application to Al-Cu directional solidification experiments
Office of Scientific and Technical Information (OSTI)
We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the...
Sandia Labs releases wavelet variability model (WVM) (Modeling, News, Photovoltaic, Solar). When a single solar photovoltaic (PV) module is in full sunlight, then is shaded by a cloud, and is back in full sunlight in a matter of seconds, a sharp dip and then an increase in power output will result. However, over an entire PV plant, clouds will often uncover some modules even as they cover others, [...]
A rail tank car of the type used to transport crude oil across North America. Recent incidents have raised concerns about the safety of this practice, which the DOE-DOT-sponsored team is investigating. (photo credit: Harvey Henkelmann) Expansion of DOE-DOT Tight Oil Research Work
... Monte Carlo modeling it was found that for noisy signals with a significant background component, accuracy is improved by fitting the total emission data, which includes the...
... Sandia Modifies Delft3D Turbine Model ...
A Nonlocal Peridynamic Plasticity Model for the Dynamic Flow and Fracture of Concrete.
Vogler, Tracy; Lammi, Christopher James
2014-10-01
A nonlocal, ordinary peridynamic constitutive model is formulated to numerically simulate the pressure-dependent flow and fracture of heterogeneous, quasi-brittle materials, such as concrete. Classical mechanics and traditional computational modeling methods do not accurately model the distributed fracture observed within this family of materials. The peridynamic horizon, or range of influence, provides a characteristic length to the continuum and limits localization of fracture. Scaling laws are derived to relate the parameters of the peridynamic constitutive model to the parameters of the classical Drucker-Prager plasticity model. Thermodynamic analysis of associated and non-associated plastic flow is performed. An implicit integration algorithm is formulated to calculate the accumulated plastic bond extension and force state. The governing equations are linearized, and the simulation of the quasi-static compression of a cylinder is compared to the classical theory. A dissipation-based peridynamic bond failure criterion is implemented to model fracture, and the splitting of a concrete cylinder is numerically simulated. Finally, calculation of the impact and spallation of a concrete structure is performed to assess the suitability of the material and failure models for simulating concrete during dynamic loadings. The peridynamic model is found to accurately simulate the inelastic deformation and fracture behavior of concrete during compression, splitting, and dynamically induced spall. The work expands the types of materials that can be modeled using peridynamics. A multi-scale methodology for simulating concrete, to be used in conjunction with the plasticity model, is presented. The work was funded by LDRD 158806.
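Since the scaling laws target the classical Drucker-Prager model, it may help to recall its yield function, f = sqrt(J2) + alpha*I1 - k, whose pressure term captures the pressure-dependent strength of concrete. The sketch below evaluates it for a uniaxial compression state; the alpha and k values are illustrative assumptions, not the report's calibration.

```python
import math

# Hedged sketch: the classical Drucker-Prager yield function
# f = sqrt(J2) + alpha*I1 - k that the peridynamic parameters are scaled
# against. f <= 0 is elastic; alpha (pressure sensitivity) and k (cohesion)
# below are illustrative values, not the report's calibration.

def drucker_prager(stress, alpha, k):
    """Evaluate f for a 3x3 stress tensor (tension positive)."""
    i1 = stress[0][0] + stress[1][1] + stress[2][2]
    mean = i1 / 3.0
    j2 = 0.0
    for i in range(3):
        for j in range(3):
            dev = stress[i][j] - (mean if i == j else 0.0)
            j2 += 0.5 * dev * dev
    return math.sqrt(j2) + alpha * i1 - k

# Uniaxial compression of 30 MPa: the negative I1 term enlarges the elastic
# range when alpha > 0, the pressure dependence concrete exhibits.
stress = [[-30.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
f_pressure = drucker_prager(stress, alpha=0.3, k=10.0)     # elastic (f < 0)
f_no_pressure = drucker_prager(stress, alpha=0.0, k=10.0)  # would yield (f > 0)
print(f"with pressure term: f = {f_pressure:.2f} MPa; without: f = {f_no_pressure:.2f} MPa")
```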
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark S.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammond, Glenn E.; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Zheng, Chunmiao
2012-03-05
The Integrated Field Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex subsurface biogeochemical setting where groundwater and riverwater interact. A series of forefront science questions on reactive mass transfer motivates research. These questions relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated biogeochemical system. The project was initiated in February 2007, with CY 2007, CY 2008, CY 2009, and CY 2010 progress summarized in preceding reports. A project peer review was held in March 2010, and the IFRC project acted upon all suggestions and recommendations made in consequence by reviewers and SBR/DOE. These responses have included the development of 'Modeling' and 'Well-Field Mitigation' plans that are now posted on the Hanford IFRC web-site, and modifications to the IFRC well-field completed in CY 2011. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. 
Significant, impactful progress has been made in CY 2011 including: (i) well modifications to eliminate well-bore flows, (ii) hydrologic testing of the modified well-field and upper aquifer, (iii) geophysical monitoring of winter precipitation infiltration through the U-contaminated vadose zone and spring river water intrusion to the IFRC, (iv) injection experimentation to probe the lower vadose zone and to evaluate the transport behavior of high U concentrations, (v) extended passive monitoring during the period of water table rise and fall, and (vi) collaborative down-hole experimentation with the PNNL SFA on the biogeochemistry of the 300 A Hanford-Ringold contact and the underlying redox transition zone. The modified well-field has functioned superbly without any evidence for well-bore flows. Beyond these experimental efforts, our site-wide reactive transport models (PFLOTRAN and eSTOMP) have been updated to include site geostatistical models of both hydrologic properties and adsorbed U distribution; and new hydrologic characterization measurements of the upper aquifer. These increasingly robust models are being used to simulate past and recent U desorption-adsorption experiments performed under different hydrologic conditions, and heuristic modeling to understand the complex functioning of the smear zone. We continued efforts to assimilate geophysical logging and 3D ERT characterization data into our site wide geophysical model, with significant and positive progress in 2011 that will enable publication in 2012. Our increasingly comprehensive field experimental results and robust reactive transport simulators, along with the field and laboratory characterization, are leading to a new conceptual model of U(VI) flow and transport in the IFRC footprint and the 300 Area in general, and insights on the microbiological community and associated biogeochemical processes influencing N, S, C, Mn, and Fe. 
Collectively these findings and higher scale models are providing a unique and unparalleled system-scale understanding of the biogeochemical function of the groundwater-river interaction zone.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Moller, Peter; Ichikawa, Takatoshi
2015-12-23
In this study, we propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use Brownian shape-motion on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables: elongation (quadrupole moment Q2), neck d, left nascent fragment spheroidal deformation ϵf1, and right nascent fragment deformation ϵf2, plus two asymmetry variables, namely the proton and neutron numbers in each of the two fragments. The extension of previous models 1) introduces a method to calculate this generalized potential-energy function and 2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the “compound-system” model to one in which the emerging fragment proton and neutron numbers also enter, over and above the compound-system composition.
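A heavily reduced toy analogue (one invented asymmetry coordinate instead of the multidimensional surface used in the paper) illustrates how a Metropolis random walk on a potential-energy surface accumulates a yield distribution; the potential and temperature are invented for illustration.

```python
import math, random

# Heavily reduced toy analogue, not the authors' code: a Metropolis random
# walk on an invented one-dimensional potential with two valleys at
# asymmetry a = +/-0.2, standing in for Brownian shape-motion on the full
# multidimensional potential-energy surface. The walk's visit histogram
# plays the role of the fragment yield, summarized here by the fraction of
# time spent at asymmetric shapes.

def potential(a):
    return 2000.0 * (a * a - 0.04) ** 2   # valleys at a = +/-0.2, barrier at a = 0

def walk(steps, temperature=1.0, step_size=0.05, seed=1):
    rng = random.Random(seed)
    a, visits = 0.0, []
    for _ in range(steps):
        trial = a + rng.uniform(-step_size, step_size)
        delta = potential(trial) - potential(a)
        if rng.random() < math.exp(min(0.0, -delta / temperature)):
            a = trial
        visits.append(a)
    return visits

visits = walk(200_000)
frac_asymmetric = sum(1 for a in visits if abs(a) > 0.1) / len(visits)
print(f"fraction of walk at |a| > 0.1: {frac_asymmetric:.2f}")
```

With the valleys deeper than the temperature, the walk spends most of its time at asymmetric shapes, a crude stand-in for an asymmetric yield peak.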
Nelson, A. J.; Cooper, G. W. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Ruiz, C. L.; Chandler, G. A.; Fehl, D. L.; Hahn, K. D.; Leeper, R. J.; Smelser, R.; Torres, J. A. [Sandia National Laboratories, Albuquerque, New Mexico 87185-1196 (United States)
2012-10-15
A novel method for modeling the neutron time-of-flight (nTOF) detector response in current mode for inertial confinement fusion experiments has been applied to the on-axis nTOF detectors located in the basement of the Z-Facility. It will be shown that this method can identify sources of neutron scattering and is useful for predicting detector responses in future experimental configurations and when experimental set-ups change. This method can also provide insight into how much broadening neutron scattering contributes to the primary signals, which can then be subtracted from them. Detector time responses are deconvolved from the signals, allowing a transformation from dN/dt to dN/dE and extracting neutron spectra at each detector location; these spectra are proportional to the absolute yield.
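The dN/dt to dN/dE transformation mentioned above is a standard change of variables; a minimal sketch follows, with an assumed 10 m flight path rather than the actual Z-Facility geometry.

```python
# Hedged sketch, not the authors' analysis code: the nonrelativistic
# change of variables behind extracting a spectrum from a time-of-flight
# signal. E = m*L^2/(2*t^2) for flight path L, so dN/dE = dN/dt * |dt/dE|
# with |dt/dE| = t^3/(m*L^2). The 10 m flight path is an assumed value.

M_N = 1.6749e-27          # neutron mass, kg
J_PER_MEV = 1.6022e-13    # joules per MeV

def energy_mev(t, L):
    """Kinetic energy (MeV) of a neutron arriving at time t (s) over L (m)."""
    return 0.5 * M_N * (L / t) ** 2 / J_PER_MEV

def dn_de(dn_dt, t, L):
    """Map one sample of a current-mode signal dN/dt to dN/dE (counts/MeV)."""
    dt_de = t ** 3 / (M_N * L ** 2) * J_PER_MEV   # |dt/dE| in s/MeV
    return dn_dt * dt_de

L = 10.0        # assumed flight path, m
t = 4.62e-7     # arrival time, s, of a ~2.45 MeV (DD fusion) neutron over 10 m
print(f"E({t:.3g} s) = {energy_mev(t, L):.2f} MeV")
```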
Final Technical Report -- Bridging the PSI Knowledge Gap: A Multiscale Approach
Whyte, Dennis
2014-12-12
The Plasma Surface Interactions (PSI) Science Center formed by the grant undertook a multidisciplinary set of studies on the complex interface between the plasma and solid states of matter. The strategy of the center was to combine and integrate the experimental, diagnostic, and modeling toolkits from multiple institutions toward specific PSI problems. In this way the Center could tackle integrated science issues that were not addressable by single institutions, as well as evolve the underlying science of the PSI more generally than for fusion applications alone. The overall strategy proved very successful. The research results and highlights of the MIT portion of the Center are primarily described here. A particular highlight is the study of tungsten nano-tendril growth in the presence of helium plasmas. The Center research provided valuable new insights into the mechanisms controlling the nano-tendrils by developing coupled modeling and in situ diagnostic methods that could be directly compared. For example, the role of helium accumulation in distorting the tungsten surface was followed with uniquely developed in situ helium concentration diagnostics. These depth-profiled, time-resolved helium concentration measurements continue to challenge the numerical models of nano-tendrils. The Center team also combined its expertise on tungsten nano-tendrils to demonstrate for the first time the growth of the tendrils in a fusion environment on the Alcator C-Mod fusion experiment, thus having significant impact on the broader fusion research effort. A new form of isolated nano-tendril “columns” was identified and is now being used to understand the underlying mechanisms controlling tendril growth. The Center also advanced PSI science on a broader front, with a particular emphasis on developing a wide range of in situ PSI diagnostic tools at the DIONISOS facility at MIT.
For example, the strong suppression of sputtering by certain combinations of light-species plasmas and metals was experimentally studied with independent measurement methods across the Center. This surprising result challenges the universal use of the binary-collision approximation in sputtering predictions and continues to be a subject of study. To address this issue, MIT developed a new in situ erosion measurement technique based on ion beam analysis that can be used at elevated material temperatures. This exciting new technique is now being used to study material erosion in high-performance plasma thrusters for space exploration and is being adapted to fusion experimental devices. This is an indicator of the positive synergies that arise from such a Center, with the research having impact beyond the initial area of study. The Center also served successfully as an organizing force for communication to the science community. The MIT members of the Center gave many high-profile overview presentations at prestigious international conferences and national workshops. The research resulted in three student theses and 24 peer-reviewed publications. PSI research continues to be identified as a critical area for fusion energy.
Singledecker, Steven J.; Jones, Scotty W.; Dorries, Alison M.; Henckel, George; Gruetzmacher, Kathleen M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2012-07-01
In the coming fiscal years of potentially declining budgets, Department of Energy facilities such as the Los Alamos National Laboratory (LANL) will be looking to reduce the cost of radioactive waste characterization, management, and disposal processes. At the core of this cost reduction effort will be choosing the most cost-effective, efficient, and accurate methods of radioactive waste characterization. Central to every radioactive waste management program is an effective and accurate waste characterization program. The choice between methods can determine what is classified as low level radioactive waste (LLRW), transuranic waste (TRU), waste that can be disposed of under an Authorized Release Limit (ARL), industrial waste, and waste that can be disposed of in municipal landfills. The cost benefits of an accurate radioactive waste characterization program cannot be overstated. Conversely, inaccurate characterization of radioactive waste can result in incorrect waste classification, leading to higher disposal costs, Department of Transportation (DOT) violations, Notices of Violation (NOVs) from Federal and State regulatory agencies, waste rejection from disposal facilities, loss of operational capabilities, and loss of disposal options. Any one of these events could result in the program that mischaracterized the waste losing its ability to perform its primary operational mission. Generators that produce radioactive waste have four characterization strategies at their disposal: - Acceptable Knowledge/Process Knowledge (AK/PK); - Indirect characterization using a software application or other dose-to-curie methodologies; - Non-Destructive Analysis (NDA) tools such as gamma spectroscopy; - Direct sampling (e.g. grab samples or Surface Contaminated Object smears) with laboratory analysis. Each method has specific advantages and disadvantages.
This paper will evaluate each method, detailing those advantages and disadvantages, including: - Cost benefit analysis (basic materials costs, overall program operations costs, man-hours per sample analyzed, etc.); - Radiation Exposure As Low As Reasonably Achievable (ALARA) program considerations; - Industrial Health and Safety risks; - Overall Analytical Confidence Level. The concepts in this paper apply to any organization with significant radioactive waste characterization and management activities working within budget constraints and seeking to optimize their waste characterization strategies while reducing analytical costs. (authors)
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark S.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammond, Glenn E.; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Ward, Anderson L.; Zheng, Chunmiao
2011-02-01
The Integrated Field Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex subsurface hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on reactive mass transfer focus research. These questions relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated system. The project was initiated in February 2007, with CY 2007, CY 2008, and CY 2009 progress summarized in preceding reports. A project peer review was held in March 2010, and the IFRC project has responded to all suggestions and recommendations made in consequence by reviewers and SBR/DOE. These responses have included the development of Modeling and Well-Field Mitigation plans that are now posted on the Hanford IFRC web-site. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. 
Significant, impactful progress has been made in CY 2010, including the quantification of well-bore flows in the fully screened wells and the testing of means to mitigate them; the development of site geostatistical models of hydrologic and geochemical properties, including the distribution of U; developing and parameterizing a reactive transport model of the smear zone that supplies contaminant U to the groundwater plume; performance of a second passive experiment on the spring water table rise and fall event with an associated multi-point tracer test; performance of downhole biogeochemical experiments in which colonization substrates and discrete water and gas samplers were deployed to the lower aquifer zone; and modeling of past injection experiments for model parameterization, deconvolution of well-bore flow effects, system understanding, and publication. We continued efforts to assimilate geophysical logging and 3D ERT characterization data into our site-wide geophysical model, and have now implemented a new strategy for this activity to bypass an approach that was found unworkable. An important focus of CY 2010 activities has been infrastructure modification of the IFRC site to eliminate vertical well-bore flows in the fully screened wells. The mitigation procedure was carefully evaluated and is now being implemented. A new experimental campaign is planned for early spring 2011 that will utilize the modified well-field for a U reactive transport experiment in the upper aquifer zone. Preliminary geophysical monitoring experiments of rainwater recharge in the vadose zone have been initiated with promising results, and a controlled infiltration experiment to evaluate U mobilization from the vadose zone is now being planned for September 2011.
The increasingly comprehensive field experimental results, along with the field and laboratory characterization, are leading to a new conceptual model of U(VI) flow and transport in the IFRC footprint and the 300 Area in general, and insights on the microbiological community and associated biogeochemical processes.
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark E.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammon, Glenn; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Ward, Anderson L.; Zheng, Chunmiao
2010-02-01
The Integrated Field-Scale Subsurface Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on mass transfer are posed for research; these relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer-dominated system. The project was initiated in February 2007, with CY 2007 and CY 2008 progress summarized in preceding reports. The site has 35 instrumented wells and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 Area aquifer. Significant, impactful progress has been made in CY 2009 with completion of extensive laboratory measurements on field sediments, field hydrologic and geophysical characterization, four field experiments, and modeling. The laboratory characterization results are being subjected to geostatistical analyses to develop spatial heterogeneity models of U concentration and chemical, physical, and hydrologic properties needed for reactive transport modeling. The field experiments focused on: (1) physical characterization of the groundwater flow field during a period of stable hydrologic conditions in early spring, (2) comprehensive groundwater monitoring during spring to characterize the release of U(VI) from the lower vadose zone to the aquifer during water table rise and fall, (3) dynamic geophysical monitoring of salt-plume migration during summer, and (4) a U reactive tracer experiment (desorption) during the fall. 
Geophysical characterization of the well field was completed using the down-well Electrical Resistance Tomography (ERT) array, with results subjected to robust, geostatistically constrained inversion analyses. These measurements, along with hydrologic characterization, have yielded 3D distributions of hydraulic properties that have been incorporated into an updated and increasingly robust hydrologic model. Based on significant findings from the microbiologic characterization of deep borehole sediments in CY 2008, down-hole biogeochemistry studies were initiated in which colonization substrates and spatially discrete water and gas samplers were deployed to select wells. The increasingly comprehensive field experimental results, along with the field and laboratory characterization, are leading to a new conceptual model of U(VI) flow and transport in the IFRC footprint and the 300 Area in general, and insights on the microbiological community and associated biogeochemical processes. A significant issue related to vertical flow in the IFRC wells was identified and evaluated during the spring and fall field experimental campaigns. Both upward and downward flows were observed in response to dynamic Columbia River stage. The vertical flows are caused by the interaction of pressure gradients with our heterogeneous hydraulic conductivity field. These impacts are being evaluated with additional modeling and field activities to facilitate interpretation and mitigation. The project moves into CY 2010 with ambitious plans for drilling additional wells in the IFRC well field, additional experiments, and modeling. This research is part of the ERSP Hanford IFRC at Pacific Northwest National Laboratory.
Gettelman, Andrew
2015-10-01
In this project we have been upgrading the Multiscale Modeling Framework (MMF) in the Community Atmosphere Model (CAM), also known as Super-Parameterized CAM (SP-CAM). This has included a major effort to update the coding standards and interface with CAM so that it can be placed on the main development trunk. It has also included development of a new software structure for CAM to be able to handle sub-grid column information. These efforts have formed the major thrust of the work.
Bishop, R. F.; Li, P. H. Y.; Campbell, C. E.
2014-10-15
We outline how the coupled cluster method of microscopic quantum many-body theory can be utilized in practice to give highly accurate results for the ground-state properties of a wide variety of highly frustrated and strongly correlated spin-lattice models of interest in quantum magnetism, including their quantum phase transitions. The method itself is described, and it is shown how it may be implemented in practice to high orders in a systematically improvable hierarchy of (so-called LSUBm) approximations, by the use of computer-algebraic techniques. The method works from the outset in the thermodynamic limit of an infinite lattice at all levels of approximation, and it is shown both how the 'raw' LSUBm results are themselves generally excellent in the sense that they converge rapidly, and how they may accurately be extrapolated to the exact limit, m → ∞, of the truncation index m, which denotes the only approximation made. All of this is illustrated via a specific application to a two-dimensional, frustrated, spin-half J{sub 1}{sup XXZ}–J{sub 2}{sup XXZ} model on a honeycomb lattice with nearest-neighbor and next-nearest-neighbor interactions with exchange couplings J{sub 1} > 0 and J{sub 2} ≡ κJ{sub 1} > 0, respectively, where both interactions are of the same anisotropic XXZ type. We show how the method can be used to determine the entire zero-temperature ground-state phase diagram of the model in the range 0 ≤ κ ≤ 1 of the frustration parameter and 0 ≤ Δ ≤ 1 of the spin-space anisotropy parameter. In particular, we identify a candidate quantum spin-liquid region in the phase space.
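The m → ∞ extrapolation of the raw LSUBm data mentioned above is commonly done by fitting a short power series in 1/m². The sketch below assumes the illustrative leading form E(m) = e₀ + a₁/m² + a₂/m⁴; this exact form and the function names are assumptions of the sketch, not taken from the paper.

```python
def extrapolate_lsubm(points):
    """points: three (m, E) pairs. Fit E = e0 + a1/m**2 + a2/m**4 exactly
    (an assumed extrapolation form) and return e0, the m -> infinity estimate."""
    # Build the 3x3 augmented Vandermonde system in x = 1/m^2.
    rows = [[1.0, 1.0 / m**2, 1.0 / m**4, e] for m, e in points]
    # Gauss-Jordan elimination with partial pivoting.
    for i in range(3):
        pivot = max(range(i, 3), key=lambda r: abs(rows[r][i]))
        rows[i], rows[pivot] = rows[pivot], rows[i]
        for r in range(3):
            if r != i:
                f = rows[r][i] / rows[i][i]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[i])]
    return rows[0][3] / rows[0][0]
```

With exact data of the assumed form, the constant term e₀ is recovered exactly; with real LSUBm sequences the fit quality itself indicates how well the series form holds.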
Park, Sungsu
2015-11-29
Originally, the main role of the P.I. (Sungsu Park) in this project was to improve the treatment of cloud microphysics in the CAM5 shallow and deep convection schemes. During the course of the project, however, the main research theme was changed, with the permission of the program manager, to the development of a new unified convection scheme (so-called UNICON).
Development of mpi_EPIC Model for Global Agroecosystem Modeling
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; Schuchart, Joseph; Kline, Keith L; Wei, Yaxing; Ricciuto, Daniel M; Wullschleger, Stan D; Post, Wilfred M; Izaurralde, Dr. R. Cesar
2015-01-01
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
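The workload balancing that mpi_EPIC's parallel design provides can be illustrated with a toy block-partitioning sketch; plain Python stands in for the MPI decomposition, and the function and its interface are hypothetical, not from mpi_EPIC.

```python
def partition(cells, nranks):
    """Block-distribute simulation cells so that per-rank workloads
    differ by at most one cell (illustrative static load balancing)."""
    base, extra = divmod(len(cells), nranks)
    chunks, start = [], 0
    for rank in range(nranks):
        # The first `extra` ranks each take one additional cell.
        size = base + (1 if rank < extra else 0)
        chunks.append(cells[start:start + size])
        start += size
    return chunks
```

In an actual MPI code each rank would then simulate only its own chunk of grid cells, with results gathered at the end of the run.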
Broader source: Energy.gov [DOE]
This document is a pre-publication Federal Register final rule regarding alternative efficiency determination methods, basic model definition, and compliance for commercial HVAC, refrigeration, and water heating equipment , as issued by the Deputy Assistant Secretary for Energy Efficiency on December 22, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Wilkowski, Gery M.; Rudland, David L.; Shim, Do-Jun; Brust, Frederick W.; Babu, Sundarsanam
2008-06-30
The potential to save trillions of BTU in energy usage and billions of dollars in cost annually through the use of higher strength steel in major oil and gas transmission pipeline construction is a compelling opportunity recognized by the US Department of Energy (DOE). The use of high-strength steels (X100) is expected to result in energy savings across the spectrum, from manufacturing the pipe to transportation and fabrication, including welding of line pipe. Elementary examples of energy savings include more than 25 trillion BTU saved annually based on lower energy costs to produce the thinner-walled high-strength steel pipe, with the potential for the US part of the Alaskan pipeline alone saving more than 7 trillion BTU in production and much more in transportation and assembly. Annual production, maintenance and installation of just US domestic transmission pipeline is likely to save 5 to 10 times this amount based on current planned and anticipated expansions of oil and gas lines in North America. Among the most important conclusions from these studies were: • While computational weld models to predict residual stress and distortions are well-established and accurate, related microstructure models need improvement. • The Fracture Initiation Transition Temperature (FITT) Master Curve properly predicts surface-cracked pipe brittle-to-ductile initiation temperature. It has value in developing Codes and Standards to better correlate full-scale behavior from either CTOD or Charpy test results with the proper temperature shifts from the FITT master curve method. • For stress-based flaw evaluation criteria, the new circumferentially cracked pipe limit-load solution in the 2007 API 1104 Appendix A approach is overly conservative by a factor of 4/π, which has additional implications. 
• For strain-based design of girth weld defects, the hoop stress effect is the most significant parameter impacting CTOD-driving force and can increase the crack-driving force by a factor of 2 depending on strain-hardening, pressure level as a % of SMYS, and flaw size. • From years of experience in circumferential fracture analyses and experimentation, there has not been sufficient integration of work performed for other industries into analogous problems facing the oil and gas pipeline markets. Some very basic concepts and problems solved previously in these fields could have circumvented inconsistencies seen in the stress-based and strain-based analysis efforts. For example, in nuclear utility piping work, more detailed elastic-plastic fracture analyses were always validated in their ability to predict loads and displacements (stresses and strains). The eventual implementation of these methodologies will result in acceleration of the industry adoption of higher-strength line-pipe steels.
Ordinary Isotropic Peridynamic Models: Position Aware Linear Solid (PALS)
Office of Scientific and Technical Information (OSTI)
John Mitchell, Multiscale Science, Computing Research, Sandia National Laboratories, Albuquerque, New Mexico. SAND2015-1012PE, Feb 17, 2015. Maturation and extension of bond-based material models.
Collinson, Glyn A.; Dorelli, John C.; Moore, Thomas E.; Pollock, Craig; Mariano, Al; Shappirio, Mark D.; Adrian, Mark L.; Avanov, Levon A.; Lewis, Gethyn R.; Kataria, Dhiren O.; Bedington, Robert; Owen, Christopher J.; Walsh, Andrew P.; Arridge, Chris S.; Gliese, Ulrik; Barrie, Alexander C.; Tucker, Corey
2012-03-15
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; Rother, Gernot; Littrell, Kenneth C.; Allard, Lawrence F.; Pollington, Anthony D.; Wesolowski, David J.
2015-06-01
We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer to centimeter scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted from 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image-processing techniques. Significant changes were observed in the multiscale pore structures. By three days much of the overgrowth in the low-porosity sample had dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass fractal or fuzzy interface behavior was observed, suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image-scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales, with increased precipitation.
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air relative to the heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction: the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e. 
wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To further satisfy KTI agreements RDTME 3.01 and 3.14 (Reamer and Williams 2001a) by providing the source documentation referred to in the KTI Letter Report, ''Effect of Forced Ventilation on Thermal-Hydrologic Conditions in the Engineered Barrier System and Near Field Environment'' (Williams 2002). Specifically to provide the results of the MULTIFLUX model which simulates the coupled processes of heat and mass transfer in and around waste emplacement drifts during periods of forced ventilation. This portion of the model report is presented as an Alternative Conceptual Model with a numerical application, and also provides corroborative results used for model validation purposes (Section 6.3 and 6.4).
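The energy bookkeeping defined in the abstract (the wall heat fraction is one minus the ventilation heat-removal fraction) can be made concrete with a minimal sketch; the function name and the numbers are illustrative only.

```python
def wall_heat_fraction(heat_removed_by_vent, decay_heat):
    """Per the report's definition: the fraction of decay heat transferred
    by conduction into the surrounding rock mass is one minus the
    fraction carried away by the ventilation air."""
    return 1.0 - heat_removed_by_vent / decay_heat
```

For example, if ventilation carries away 70 of every 100 units of decay heat, the remaining wall heat fraction of 0.3 is what downstream post-closure thermal models would be initialized with.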
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
Kohut, Sviataslau V.; Staroverov, Viktor N.; Ryabinkin, Ilya G.
2014-05-14
We describe a method for constructing a hierarchy of model potentials approximating the functional derivative of a given orbital-dependent exchange-correlation functional with respect to electron density. Each model is derived by assuming a particular relationship between the self-consistent solutions of Kohn–Sham (KS) and generalized Kohn–Sham (GKS) equations for the same functional. In the KS scheme, the functional is differentiated with respect to density; in the GKS scheme, with respect to orbitals. The lowest-level approximation is the orbital-averaged effective potential (OAEP) built with the GKS orbitals. The second-level approximation, termed the orbital-consistent effective potential (OCEP), is based on the assumption that the KS and GKS orbitals are the same. It has the form of the OAEP plus a correction term. The highest-level approximation is the density-consistent effective potential (DCEP), derived under the assumption that the KS and GKS electron densities are equal. The analytic expression for a DCEP is the OCEP formula augmented with kinetic-energy-density-dependent terms. In the case of the exact-exchange functional, the OAEP is the Slater potential, the OCEP is roughly equivalent to the localized Hartree–Fock approximation and related models, and the DCEP is practically indistinguishable from the true optimized effective potential for exact exchange. All three levels of the proposed hierarchy require solutions of the GKS equations as input and have the same affordable computational cost.
Continuum-kinetic-microscopic model of lung clearance due to core-annular fluid entrainment
Mitran, Sorin
2013-07-01
The human lung is protected against aspirated infectious and toxic agents by a thin liquid layer lining the interior of the airways. This airway surface liquid is a bilayer composed of a viscoelastic mucus layer supported by a fluid film known as the periciliary liquid. The viscoelastic behavior of the mucus layer is principally due to long-chain polymers known as mucins. The airway surface liquid is cleared from the lung by ciliary transport, surface tension gradients, and airflow shear forces. This work presents a multiscale model of the effect of airflow shear forces, as exerted by tidal breathing and cough, upon clearance. The composition of the mucus layer is complex and variable in time. To avoid the restrictions imposed by adopting a viscoelastic flow model of limited validity, a multiscale computational model is introduced in which the continuum-level properties of the airway surface liquid are determined by microscopic simulation of long-chain polymers. A bridge between microscopic and continuum levels is constructed through a kinetic-level probability density function describing polymer chain configurations. The overall multiscale framework is especially suited to biological problems due to the flexibility afforded in specifying microscopic constituents, and examining the effects of various constituents upon overall mucus transport at the continuum scale.
Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.
2001-01-16
Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. 
The foregoing represents an advance in computational time by multiple orders of magnitude.
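The integer-increment particle stepping described above can be illustrated with a simplified one-dimensional analogue; the names and the 1D reduction are inventions of this sketch, whereas the patented method steps through 3D volume elements.

```python
def step_to_material_change(materials, start, direction):
    """Advance in unit (integer) increments along `direction` (+1 or -1)
    from voxel `start` until the anatomical material differs from the
    starting voxel. Return the index of intersection, or None if the
    particle exits the grid (analogous to a neutron leaving the model)."""
    ref = materials[start]
    i = start + direction
    while 0 <= i < len(materials):
        if materials[i] != ref:
            return i  # position of intersection: material boundary reached
        i += direction
    return None  # particle exited the geometric model
```

The intersection index plays the role described in the abstract: it is where a capture, scatter, or exit decision would be made before dose tallying.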
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of an electronic portal imaging device (EPID) and applying it to a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. The model was validated through comparison with measurements of dose on the EPID, first using open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction that employed our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom simulating prostate and large-pelvis intensity-modulated radiation therapy. To enhance agreement between calculations and measurements of dose near penumbral regions, convolution conversion of acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through measurements of image and calculations of dose in the EPID through flat phantoms of various thicknesses. The factors were used to convert acquired images in the EPID into dose. Results: For open beam measurements, the model showed agreement with measurements in dose difference better than 2% across open fields. For tests with a Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in the EPID, showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses, showing pass rates between 93.3% and 100% in isocentric perpendicular planes to the beam direction given 3 mm DTA and 3% DD for all beams. 
On isocentric axial planes, the pass rates varied between 95.8% and 99.9% for all individual beams, and they were 98.2% and 99.9% for the composite beams of the small and large pelvis cases, respectively. Three-dimensional gamma pass rates were 99.0% and 96.4% for the small and large pelvis cases, respectively. Conclusions: The layer model of the EPID built for Monte Carlo calculations offered fast (less than 1 min) and accurate calculation for transit dosimetry and dose reconstruction.
Multiscale simulation of xenon diffusion and grain boundary segregation in UO₂
Andersson, David A.; Tonks, Michael R.; Casillas, Luis; Vyas, Shyam; Nerikar, Pankaj; Uberuaga, Blas P.; Stanek, Christopher R.
2015-07-01
In light water reactor fuel, gaseous fission products segregate to grain boundaries, resulting in the nucleation and growth of large intergranular fission gas bubbles. The segregation rate is controlled by diffusion of fission gas atoms through the grains and interaction with the boundaries. Based on the mechanisms established from earlier density functional theory (DFT) and empirical potential calculations, diffusion models for xenon (Xe), uranium (U) vacancies and U interstitials in UO₂ have been derived for both intrinsic (no irradiation) and irradiation conditions. Segregation of Xe to grain boundaries is described by combining the bulk diffusion model with a model for the interaction between Xe atoms and three different grain boundaries in UO₂ (Σ5 tilt, Σ5 twist and a high angle random boundary), as derived from atomistic calculations. The present model does not attempt to capture nucleation or growth of fission gas bubbles at the grain boundaries. The point defect and Xe diffusion and segregation models are implemented in the MARMOT phase field code, which is used to calculate effective Xe and U diffusivities as well as to simulate Xe redistribution for a few simple microstructures.
Henson, Kriste M; Goulias, Konstadinos G
2010-11-30
The ability to transfer national travel patterns to a local population is of interest when attempting to model megaregions or areas that exceed metropolitan planning organization (MPO) boundaries. At the core of this research are questions about the connection between travel behavior and land use, urban form, and accessibility. As part of this process, a group of land use variables has been identified to define activity and travel patterns for individuals and households. The 2001 National Household Travel Survey (NHTS) participants are divided into categories comprised of a set of latent cluster models representing persons, travel, and land use. These are compared to two sets of cluster models constructed for two local travel surveys. Comparison-of-means statistical tests are used to assess differences among sociodemographic groups residing in localities with similar land uses. The results show that the NHTS and the local surveys share mean population activity and travel characteristics. However, these similarities mask behavioral heterogeneity that is revealed when distributions of activity and travel behavior are examined. Therefore, data from a national household travel survey cannot be used to model local population travel characteristics if the goal is to model the actual distributions rather than mean travel behavior characteristics.
Khachatryan, V.
2015-06-09
A search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks is presented. Events with hadronic jets and one or two oppositely charged leptons are selected from a data sample corresponding to an integrated luminosity of 19.5 fb^{-1} collected by the CMS experiment at the LHC in pp collisions at a centre-of-mass energy of 8 TeV. In order to separate the signal from the larger tt̄ + jets background, this analysis uses a matrix element method that assigns a probability density value to each reconstructed event under signal or background hypotheses. The ratio between the two values is used in a maximum likelihood fit to extract the signal yield. The results are presented in terms of the measured signal strength modifier, μ, relative to the standard model prediction for a Higgs boson mass of 125 GeV. The observed (expected) exclusion limit at a 95% confidence level is μ < 4.2 (3.3), corresponding to a best fit value μ̂ = 1.2^{+1.6}_{-1.5}.
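The matrix element method described in this abstract assigns each event a probability density under the signal and background hypotheses and uses their ratio as a discriminant. A minimal sketch of that ratio, with toy Gaussian densities standing in for the true matrix-element weights (which integrate differential cross sections over detector transfer functions):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Normal probability density, used here only as a toy stand-in."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def me_discriminant(p_sig, p_bkg):
    """Matrix-element-style discriminant: ratio of per-event probability
    densities under the signal and background hypotheses, mapped to [0, 1]."""
    return p_sig / (p_sig + p_bkg)

# Hypothetical event observable and densities, for illustration only
event_observable = 1.8
p_s = gauss_pdf(event_observable, mu=2.0, sigma=0.5)   # signal hypothesis
p_b = gauss_pdf(event_observable, mu=0.0, sigma=1.0)   # background hypothesis
d = me_discriminant(p_s, p_b)  # near 1 for signal-like events, near 0 for background-like
```

In the analysis itself, a binned distribution of such discriminant values feeds the maximum likelihood fit for the signal strength μ.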
Multi-scale and Multi-phase deformation of crystalline materials
Energy Science and Technology Software Center (OSTI)
2007-12-01
The MDEF package contains capabilities for modeling the deformation of materials at the crystal scale. Primary code capabilities are: both "strength" and "equation of state" aspects of material response, post-processing utilities, and utilities for comparing results with data from diffraction experiments.
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
Ingber, Marc; Vorobieff, Peter
2014-03-14
We have experimentally demonstrated how microscale phenomena affect suspended particle behavior on the mesoscale, and how particle group behavior on the mesoscale influences the macroscale suspension behavior. Semi-analytical and numerical methods to treat flows on different scales have been developed, and a framework to combine these scale-dependent treatments has been described.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; Rother, Gernot; Littrell, Kenneth C.; Allard, Lawrence F.; Pollington, Anthony D.; Wesolowski, David J.
2015-06-01
We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer to centimeter scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted from 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image-scale processing techniques. Significant changes were observed in the multiscale pore structures. By three days much of the overgrowth in the low-porosity sample dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass fractal or fuzzy interface behavior was observed, suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales with increased precipitation.
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec; Peng, Ivy Bo; Laure, Erwin; Markidis, Stefano
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
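A minimal sketch of the Hermite projection underlying such a spectral method: expand a distribution function in orthonormal Hermite functions, where a near-Maxwellian is captured almost entirely by the lowest-order term, so an adaptive scheme can truncate the expansion early. The grid, quadrature, and truncation choices here are illustrative, not the authors'.

```python
import math

def hermite_poly(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hermite_function(n, x):
    """Orthonormal Hermite function psi_n(x) = H_n(x) e^{-x^2/2} / norm."""
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermite_poly(n, x) * math.exp(-0.5 * x * x) / norm

def hermite_coefficients(f, n_max, xs):
    """Project f onto the first n_max Hermite functions by simple quadrature."""
    dx = xs[1] - xs[0]
    return [sum(f(x) * hermite_function(n, x) for x in xs) * dx
            for n in range(n_max)]

def maxwellian(v):
    """A Maxwellian-like velocity distribution (unnormalized)."""
    return math.exp(-0.5 * v * v)

xs = [-8.0 + 16.0 * i / 800 for i in range(801)]
coeffs = hermite_coefficients(maxwellian, 6, xs)
# coeffs[0] dominates; higher-order coefficients are ~0 for this distribution
```

This is the sense in which the method "combines the fluid and kinetic approaches": a few Hermite terms recover fluid moments, while adding terms resolves kinetic detail.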
Accelerated Cartesian expansions for the rapid solution of periodic multiscale problems
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Baczewski, Andrew David; Dault, Daniel L.; Shanker, Balasubramaniam
2012-07-03
We present an algorithm for the fast and efficient solution of integral equations that arise in the analysis of scattering from periodic arrays of PEC objects, such as multiband frequency selective surfaces (FSS) or metamaterial structures. Our approach relies upon the method of Accelerated Cartesian Expansions (ACE) to rapidly evaluate the requisite potential integrals. ACE is analogous to FMM in that it can be used to accelerate the matrix vector product used in the solution of systems discretized using MoM. Here, ACE provides linear scaling in both CPU time and memory. Details regarding the implementation of this method within themore » context of periodic systems are provided, as well as results that establish error convergence and scalability. In addition, we also demonstrate the applicability of this algorithm by studying several exemplary electrically dense systems.« less
Multiscale Speciation of U and Pu at Chernobyl, Hanford, Los Alamos,
Multiplex automated genome engineering. The present invention relates to automated methods of introducing multiple nucleic acid sequences into one or more target cells. Authors: Church, George M.; Wang, Harris H.; Isaacs, Farren J. Publication Date: 2013-10-29. OSTI Identifier: 1107638. Report Number(s): 8,569,041; 13/411,712. DOE Contract Number: FG02-02ER63445. Resource Type: Patent. Research Org: Harvard University
Yortsos, Yanis C.
2002-10-08
In this report, the thrust areas include the following: internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities, and the flow of fluids with yield stress. These find respective applications in foamy oils and the evolution of dissolved gas; internal steam drives; the mechanics of concurrent and countercurrent vapor-liquid flows associated with thermal methods and steam injection, such as SAGD; in-situ combustion; the upscaling of displacements in heterogeneous media; and the flow of foams, Bingham plastics and heavy oils in porous media, including the development of wormholes during cold production.
Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.
2011-09-30
This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.
Michael Tonks; Derek Gaston; Cody Permann; Paul Millett; Glen Hansen; Chris Newman
2009-08-01
Reactor fuel performance is sensitive to microstructure changes during irradiation (such as fission gas and pore formation). This study proposes an approach to capture microstructural changes in the fuel by a two-way coupling of a mesoscale phase field irradiation model to an engineering scale, finite element calculation. This work solves the multiphysics equation system at the engineering-scale in a parallel, fully-coupled, fully-implicit manner using a preconditioned Jacobian-free Newton Krylov method (JFNK). A sampling of the temperature at the Gauss points of the coarse scale is passed to a parallel sequence of mesoscale calculations within the JFNK function evaluation phase of the calculation. The mesoscale thermal conductivity is calculated in parallel, and the result is passed back to the engineering-scale calculation. As this algorithm is fully contained within the JFNK function evaluation, the mesoscale calculation is nonlinearly consistent with the engineering-scale calculation. Further, the action of the Jacobian is also consistent, so the composite algorithm provides the strong nonlinear convergence properties of Newton's method. The coupled model using INL's BISON code demonstrates quadratic nonlinear convergence and good parallel scalability. Initial results predict the formation of large pores in the hotter center of the pellet, but few pores on the outer circumference. Thus, the thermal conductivity is reduced in the center of the pellet, leading to a higher internal temperature than that in an unirradiated pellet.
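The core of a JFNK scheme like the one described is that the Jacobian is never formed explicitly; its action on a Krylov vector is approximated by a finite difference of the residual. A self-contained sketch on a toy two-equation residual (not the BISON system):

```python
def jfnk_matvec(residual, u, v, eps=1e-7):
    """Jacobian-free approximation of the action J(u)·v used in JFNK:
    a first-order finite difference of the residual along direction v."""
    f0 = residual(u)
    f1 = residual([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(f1, f0)]

# Toy nonlinear residual F(u) = [u0^2 + u1 - 3, u0 + u1^2 - 5]
def residual(u):
    return [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]

u = [1.0, 2.0]
v = [1.0, 0.0]
jv = jfnk_matvec(residual, u, v)
# Analytic Jacobian at u is [[2*u0, 1], [1, 2*u1]], so J·v = [2.0, 1.0]
```

Because only residual evaluations are needed, expensive subgrid physics (here, the mesoscale conductivity calculation) can sit inside `residual` and still remain nonlinearly consistent with the outer Newton iteration.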
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.
2016-01-28
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. Finally, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
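A hedged sketch of the surrogate idea: decompose a series with a Haar wavelet transform, shuffle the detail coefficients within each band, and reconstruct. The result is a randomized series that exactly preserves the original mean and per-band wavelet energy. The authors' encoding scheme is more elaborate; this is illustrative only.

```python
import random

def haar_forward(x):
    """Full Haar decomposition of a length-2^k signal: returns the
    approximation scalar and the detail bands, ordered coarse to fine."""
    details = []
    a = list(x)
    while len(a) > 1:
        approx = [(a[2 * i] + a[2 * i + 1]) / 2.0 for i in range(len(a) // 2)]
        det = [(a[2 * i] - a[2 * i + 1]) / 2.0 for i in range(len(a) // 2)]
        details.append(det)
        a = approx
    return a[0], details[::-1]

def haar_inverse(approx, details):
    """Reconstruct the signal from the approximation and detail bands."""
    a = [approx]
    for det in details:
        pairs = [(ai + di, ai - di) for ai, di in zip(a, det)]
        a = [val for pair in pairs for val in pair]
    return a

def wavelet_surrogate(x, rng):
    """Surrogate series: shuffle detail coefficients within each band,
    preserving the mean exactly and the per-band energy of the original."""
    approx, details = haar_forward(x)
    for det in details:
        rng.shuffle(det)
    return haar_inverse(approx, details)

rng = random.Random(0)
signal = [float(k % 5) for k in range(16)]
surrogate = wavelet_surrogate(signal, rng)
```

Shuffling within bands randomizes the fine-scale ordering while leaving band-wise statistics intact, which is the sense in which the surrogate is "statistically equivalent" to the encoded microscale dynamics.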
Michael V. Glazoff
2014-10-01
In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry is going to continue using advanced zirconium cladding materials in the foreseeable future, it becomes critical to gain fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting the oxidation and hydrogen pick-up by these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings (if any) could be used in order to gain extremely valuable time at off-normal conditions, e.g., when temperature exceeds the critical value of 2200°F? Thirdly, the kinetics of oxidation of such protective coating or braiding needs to be quantified. Lastly, even if some degree of success is achieved along this path, it is absolutely critical to have automated inspection algorithms allowing identification of cladding defects as soon as possible. This work strives to explore these interconnected factors from the most advanced computational perspective, utilizing such modern techniques as first-principles atomistic simulations, computational thermodynamics of materials, diffusion modeling, and the morphological algorithms of image processing for defect identification. Consequently, it consists of four parts dealing with these four problem areas, preceded by the introduction and formulation of the studied problems. In the 1st part an effort was made to employ computational thermodynamics and ab initio calculations to shed light upon the different stages of oxidation of zircaloys (2 and 4), the role of microstructure optimization in increasing their thermal stability, and the process of hydrogen pick-up, both in normal working conditions and in long-term storage.
The 2nd part deals with the need to understand the influence and respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning (at low T) and crystallographic slip (at higher T). For that goal, a description of the advanced plasticity model is outlined featuring the non-associated flow rule in hcp materials including Zr. The 3rd part describes the kinetic theory of oxidation of several materials considered to be prospective coating materials for Zr alloys: SiC and ZrSiO4. In the 4th part novel and advanced projectional algorithms for defect identification in zircaloy coatings are described. In so doing, the author capitalized on some 12 years of his applied industrial research in this area. Our conclusions and recommendations are presented in the 5th part of this work, along with the list of used literature and the scripts for atomistic, thermodynamic, kinetic, and morphological computations.
Final Report: Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Haggerty, Roy; Day-Lewis, Fred; Singha, Kamini; Johnson, Timothy; Binley, Andrew; Lane, John
2014-03-20
Mass transfer affects contaminant transport and is thought to control the efficiency of aquifer remediation at a number of sites within the Department of Energy (DOE) complex. An improved understanding of mass transfer is critical to meeting the enormous scientific and engineering challenges currently facing DOE. Informed design of site remedies and long-term stewardship of radionuclide-contaminated sites will require new cost-effective laboratory and field techniques to measure the parameters controlling mass transfer spatially and across a range of scales. In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Including the NMR component, our revised study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. 
To achieve our objectives, we implemented a 3-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE’s Hanford 300 Area. In a synergistic add-on to our workplan, we analyzed data from field experiments performed at the DOE Naturita Site under a separate DOE SBR grant, on which PI Day-Lewis served as co-PI. Techniques developed for application to Hanford datasets also were applied to data from Naturita.
Smith, Jovanca J.; Bishop, Joseph E.
2013-11-01
This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code, developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.
V. Chipman; J. Case
2002-12-20
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). 
Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To assess the impacts of moisture on the ventilation efficiency.
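The wall heat fraction defined in the Ventilation Model report (one minus the fraction of decay heat carried away by the ventilation air) reduces to a one-line calculation. The numbers below are illustrative only, not taken from the report.

```python
def wall_heat_fraction(heat_removed_by_air_w, heat_produced_by_decay_w):
    """Wall heat fraction as defined in the report: one minus the
    fraction of radionuclide decay heat removed by the ventilation air;
    the remainder conducts into the surrounding rock mass."""
    ventilation_efficiency = heat_removed_by_air_w / heat_produced_by_decay_w
    return 1.0 - ventilation_efficiency

# Hypothetical numbers: if ventilation removes 880 W of a drift segment's
# 1000 W decay output, 12% of the heat conducts to the rock.
f_wall = wall_heat_fraction(880.0, 1000.0)
```

It is this fraction, tabulated in time and along the drift, that downstream models such as the Multiscale Thermohydrologic Model use to initialize post-closure analyses.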
Li, Dongsheng; Zbib, Hussein M.; Garmestani, Hamid; Sun, Xin; Khaleel, Mohammad A.
2011-07-01
Stainless steels based on Fe-Cr-Ni alloys are the most popular structural materials used in reactors. High-energy particle irradiation of this kind of polycrystalline structural material usually produces irradiation hardening and embrittlement. The development of predictive capability for the influence of irradiation on mechanical behavior is very important in materials design for next-generation reactors. Irradiation hardening is related to structural information across different length scales, such as composition, dislocation structure, and crystal orientation distribution. To predict the effective hardening, the influencing factors along different length scales should be considered. A multiscale approach was implemented in this work to predict irradiation hardening of iron-based structural materials. Three length scales are involved in this multiscale model: nanometer, micrometer and millimeter. At the nanoscale, molecular dynamics (MD) was utilized to predict the edge dislocation mobility in body centered cubic (bcc) Fe and its Ni and Cr alloys. On the mesoscale, dislocation dynamics (DD) models were used to predict the critical resolved shear stress from the evolution of local dislocations and defects. At the macroscale, a viscoplastic self-consistent (VPSC) model was applied to predict the irradiation hardening in samples with changes in texture. The effects of defect density and texture were investigated. Simulated evolution of yield strength with irradiation agrees well with the experimental data of irradiation strengthening of stainless steel 304L, 316L and T91. The multiscale model developed in this project can provide a guidance tool in performance evaluation of structural materials for next-generation nuclear reactors.
Combining with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have more impact in improving the reliability of current reactors and affordability of new reactors.
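Pipelines of this kind typically pass defect statistics up-scale through a dispersed-barrier hardening relation. The sketch below uses that standard textbook form; it is not taken from the report, and all parameter values are hypothetical.

```python
import math

def dispersed_barrier_hardening(alpha, shear_modulus_pa, burgers_m,
                                defect_density_per_m3, defect_diameter_m,
                                taylor_factor=3.06):
    """Standard dispersed-barrier estimate of irradiation hardening:
    delta_sigma = M * alpha * mu * b * sqrt(N * d), where N is the
    defect number density and d the mean defect diameter."""
    return (taylor_factor * alpha * shear_modulus_pa * burgers_m
            * math.sqrt(defect_density_per_m3 * defect_diameter_m))

# Representative (hypothetical) numbers for an Fe-based alloy
delta_mpa = dispersed_barrier_hardening(
    alpha=0.2, shear_modulus_pa=80e9, burgers_m=2.5e-10,
    defect_density_per_m3=1e22, defect_diameter_m=5e-9) / 1e6
# Yields a hardening increment on the order of tens of MPa
```

In a multiscale chain such as the one described, the barrier strength α would come from the MD/DD levels rather than being assumed.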
Energy Science and Technology Software Center (OSTI)
2014-06-25
PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include application to energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
Please join us for a triple-header seminar organized around Modeling RNA
and Protein/RNA Complexes | Stanford Synchrotron Radiation Lightsource. Tuesday, November 13, 2012 - 11:15am, SSRL, Bldg. 137-322. Speakers: Julie Bernauer, Debanu Das & Dimitar Pachov. Program Description: 11:15-11:45 Julie Bernauer (INRIA AMIB Bioinfo), Multi-scale modeling for RNA structures: a challenge; 11:45-12:00 Debanu Das (SSRL JSCG), Progress on HT-SB of Protein/Nucleic Acid complexes at
Pore-Scale and Multiscale Numerical Simulation of Flow and Transport in a Laboratory-Scale Column
Scheibe, Timothy D.; Perkins, William A.; Richmond, Marshall C.; McKinley, Matthey I.; Romero Gomez, Pedro DJ; Oostrom, Martinus; Wietsma, Thomas W.; Serkowski, John A.; Zachara, John M.
2015-02-01
Pore-scale models are useful for studying relationships between fundamental processes and phenomena at larger (i.e., Darcy) scales. However, the size of domains that can be simulated with explicit pore-scale resolution is limited by computational and observational constraints. Direct numerical simulation of pore-scale flow and transport is typically performed on millimeter-scale volumes at which X-ray computed tomography (XCT), often used to characterize pore geometry, can achieve micrometer resolution. In contrast, the scale at which a continuum approximation of a porous medium is valid is usually larger, on the order of centimeters to decimeters. Furthermore, laboratory experiments that measure continuum properties are typically performed on decimeter-scale columns. At this scale, XCT resolution is coarse (tens to hundreds of micrometers) and prohibits characterization of small pores and grains. We performed simulations of pore-scale processes over a decimeter-scale volume of natural porous media with a wide range of grain sizes, and compared to results of column experiments using the same sample. Simulations were conducted using high-performance codes executed on a supercomputer. Two approaches to XCT image segmentation were evaluated, a binary (pores and solids) segmentation and a ternary segmentation that resolved a third category (porous solids with pores smaller than the imaged resolution). We used a mixed Stokes-Darcy simulation method to simulate the combination of Stokes flow in large open pores and Darcy-like flow in porous solid regions. Simulations based on the ternary segmentation provided results that were consistent with experimental observations, demonstrating our ability to successfully model pore-scale flow over a column-scale domain.
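A minimal sketch of the ternary segmentation idea described above: each XCT voxel intensity is assigned to open pore, porous solid (pores below the imaged resolution), or solid grain using two thresholds. The intensity values and thresholds below are hypothetical.

```python
def ternary_segment(voxel, solid_threshold, pore_threshold):
    """Ternary segmentation of a grayscale XCT intensity into open pore,
    porous solid (sub-resolution porosity), or solid grain."""
    if voxel <= pore_threshold:
        return "pore"
    if voxel >= solid_threshold:
        return "solid"
    return "porous_solid"

# Hypothetical 8-bit intensities, for illustration only
image = [12, 240, 130, 90, 201, 30]
labels = [ternary_segment(v, solid_threshold=180, pore_threshold=60)
          for v in image]
```

In the mixed Stokes-Darcy simulations, the "pore" voxels carry Stokes flow while the "porous_solid" voxels are treated as a Darcy continuum, which is what made the ternary segmentation consistent with the column experiments.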
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-15
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
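The FFT-based interpolation used for spatial mesh coupling can be sketched as trigonometric interpolation by zero-padding the spectrum: inserting zeros between the positive- and negative-frequency halves refines the grid without introducing high spatial frequencies. A naive DFT is used here for self-containment; the authors' implementation details differ.

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Naive inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def fourier_interpolate(x, factor):
    """Trigonometric interpolation by zero-padding the spectrum: zeros are
    inserted between the positive- and negative-frequency coefficient halves,
    so the refined samples contain no new high spatial frequencies."""
    n = len(x)
    X = dft(x)
    m = n * factor
    half = n // 2
    Xp = X[:half] + [0.0] * (m - n) + X[half:]
    return [factor * v.real for v in idft(Xp)]

# Band-limited test signal: one period of a sine sampled at 8 points,
# interpolated onto 16 points
x = [math.sin(2 * math.pi * k / 8) for k in range(8)]
y = fourier_interpolate(x, 2)
```

For a band-limited signal like this one, the refined samples coincide with the underlying sine evaluated on the finer grid, which is exactly the low-pass property the stability argument relies on.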
Nelson, Alan J.; Cooper, Gary Wayne; Ruiz, Carlos L.; Chandler, Gordon Andrew; Fehl, David Lee; Hahn, Kelly Denise; Leeper, Ramon Joe; Smelser, Ruth Marie; Torres, Jose A.
2013-09-01
There are several machines in this country that produce short bursts of neutrons for various applications. A few examples are the Z machine, operated by Sandia National Laboratories in Albuquerque, NM; the OMEGA Laser Facility at the University of Rochester in Rochester, NY; and the National Ignition Facility (NIF) operated by the Department of Energy at Lawrence Livermore National Laboratory in Livermore, California. They all incorporate neutron time of flight (nTOF) detectors which measure neutron yield, and the shapes of the waveforms from these detectors contain germane information about the plasma conditions that produce the neutrons. However, the signals can also be "clouded" by a certain fraction of neutrons that scatter off structural components and also arrive at the detectors, thereby making analysis of the plasma conditions more difficult. These detectors operate in current mode - i.e., they have no discrimination, and all the photomultiplier anode charges are integrated rather than counted individually as they are in single-event counting. Up to now, there has not been a method for modeling an nTOF detector operating in current mode. MCNP-PoliMi was developed in 2002 to simulate neutron and gamma-ray detection in a plastic scintillator, and produces a collision data output table about each neutron and photon interaction occurring within the scintillator; however, the post-processing code which accompanies MCNP-PoliMi assumes a detector operating in single-event counting mode and not current mode. Therefore, the idea for this work had been born: could a new post-processing code be written to simulate an nTOF detector operating in current mode? And if so, could this process be used to address such issues as the impact of neutron scattering on the primary signal? Also, could it possibly even identify sources of scattering (i.e., structural materials) that could be removed or modified to produce "cleaner" neutron signals?
This process was first developed and then applied to the axial neutron time-of-flight detectors at the Z Facility mentioned above. First, MCNP-PoliMi was used to model relevant portions of the facility between the source and the detector locations. To obtain useful statistics, variance reduction was utilized. The resulting collision output table produced by MCNP-PoliMi was then analyzed by a MATLAB post-processing code, which converted the energy deposited by neutron and photon interactions in the plastic scintillator (i.e., the nTOF detector) into light output, in units of MeVee (electron equivalent) versus time. The time response of the detector was then folded into the signal via another MATLAB code. The simulated response was compared with experimental data and shown to be in good agreement. To address the issue of neutron scattering, an “Ideal Case” was simulated, in which a plastic scintillator was placed at the same distance from the source for each detector location but with no structural components in the problem. This was done to produce as “pure” a neutron signal as possible. The simulated waveform from this “Ideal Case” was then compared with the simulated data from the “Full Scale” geometry (i.e., the detector at the same location, but with all the structural materials now included). The “Ideal Case” was subtracted from the “Full Scale” case, and the difference was taken to be the contribution due to scattering. The time response was deconvolved out of the empirical data, and the contribution due to scattering was then subtracted from it. Finally, a transformation was made from dN/dt to dN/dE to obtain neutron spectra at two different detector locations.
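The scattering subtraction and the dN/dt-to-dN/dE transformation described above can be sketched as follows. This is a minimal illustration: the mock Gaussian waveform, the flat scatter background, and the 9 m flight path are assumptions for the sketch, not values from the study, and the deconvolution of the detector time response is omitted.

```python
import numpy as np

C = 2.998e8      # speed of light, m/s
MN_C2 = 939.565  # neutron rest-mass energy, MeV

def dndt_to_dnde(t, dndt, d):
    """Map dN/dt (per s) at flight distance d (m) to dN/dE (per MeV),
    nonrelativistically: E = 0.5 * MN_C2 * (v/c)^2 with v = d/t."""
    v = d / t
    E = 0.5 * MN_C2 * (v / C) ** 2
    dEdt = MN_C2 * d**2 / (C**2 * t**3)  # |dE/dt|, MeV/s
    return E, dndt / dEdt                # dN/dE = dN/dt * |dt/dE|

d = 9.0                               # hypothetical flight path, m
t = np.linspace(150e-9, 400e-9, 500)  # s; 14.1 MeV DT neutrons arrive near d/v
ideal = np.exp(-0.5 * ((t - 173e-9) / 8e-9) ** 2)  # "Ideal Case": no structures
full = ideal + 0.05                                # "Full Scale": adds a scatter tail
scatter = full - ideal                # contribution attributed to scattering
E, dnde = dndt_to_dnde(t, full - scatter, d)
```

Subtracting the two simulated waveforms isolates the scatter component, and the Jacobian |dt/dE| converts the scatter-corrected time history into a neutron spectrum.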
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
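The Gelman-Rubin diagnostic compares within-chain and between-chain variance across parallel chains and stops sampling once they agree. A minimal sketch follows; the 1.1 stopping threshold is a common convention in the MCMC literature, not a value quoted in this report:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array of
    m parallel chains with n samples each."""
    chains = np.asarray(chains, dtype=float)
    _, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
converged = rng.normal(size=(4, 2000))  # chains sampling the same target
stuck = converged.copy()
stuck[3] += 5.0                         # one chain stuck far from the others
r_ok, r_bad = gelman_rubin(converged), gelman_rubin(stuck)
# a common automatic stopping rule: halt transport calculations once R-hat < 1.1
```

When the chains sample the same distribution, R-hat falls to near 1; a stuck or offset chain inflates the between-chain variance and keeps R-hat well above the threshold.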
CHARACTERIZING COMPLEXITY IN SOLAR MAGNETOGRAM DATA USING A WAVELET-BASED SEGMENTATION METHOD
Kestener, P.; Khalil, A.; Arneodo, A.
2010-07-10
The multifractal nature of solar photospheric magnetic structures is studied using the two-dimensional wavelet transform modulus maxima (WTMM) method. This relies on computing partition functions from the wavelet transform skeleton defined by the WTMM method. This skeleton provides an adaptive space-scale partition of the fractal distribution under study, from which one can extract the multifractal singularity spectrum. We describe the implementation of a multiscale image processing segmentation procedure based on the partitioning of the WT skeleton, which allows the disentangling of the information concerning the multifractal properties of active regions from the surrounding quiet-Sun field. The quiet Sun exhibits an average Hölder exponent of approximately -0.75, with observed multifractal properties due to the supergranular structure. On the other hand, active region multifractal spectra exhibit an average Hölder exponent of approximately 0.38, similar to those found when studying experimental data from turbulent flows.
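For a monofractal signal, a single global Hölder/Hurst exponent can be estimated from second-order structure functions. The sketch below deliberately substitutes that shortcut for the full WTMM machinery (which is what actually recovers a singularity *spectrum* from the maxima skeleton); it only illustrates the scaling-exponent idea:

```python
import numpy as np

def holder_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate a global Hoelder/Hurst exponent H from second-order structure
    functions S2(tau) = <|x(t+tau) - x(t)|^2> ~ tau^(2H)."""
    lags = np.asarray(lags)
    s2 = np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])
    slope = np.polyfit(np.log(lags), np.log(s2), 1)[0]  # log-log scaling slope
    return slope / 2.0

rng = np.random.default_rng(1)
brownian = np.cumsum(rng.normal(size=200_000))  # Brownian motion: H = 0.5
H = holder_exponent(brownian)
```

For Brownian motion the estimate recovers H near 0.5; multifractal fields like the magnetograms above need the scale-adaptive WTMM partition instead of this single exponent.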
Weyand, J.D.
1986-11-18
A method is disclosed of making a region exhibiting a range of compositions, comprising plasma spraying various compositions on top of one another onto a base. 2 figs.
Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.
2015-05-14
Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which make it possible to improve their durability and performance considerably. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, such as electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, such as electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposition time of the polymer bulk heterojunction to the action of an external electric field can lead to a low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposition time, the nanophases align along the electric field lines, increasing the number of continuous percolation paths and, ultimately, reducing exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active-layer components, we conclude that too low or too high an affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency.
For other particle numbers and sizes, we observe only a local perturbation of the nanostructure, diminishing the number of continuous percolation paths to the electrodes and, therefore, reducing the device performance. From these investigations, we conclude that our multiscale solar-cell algorithm is an effective approach to investigate the impact of device materials and post-production treatments on the photovoltaic performance of polymer solar cells.
Callaway, J.M.
1982-08-01
Alternative methods for quantifying the economic impacts associated with future increases in the ambient concentration of CO2 were examined. A literature search was undertaken, both to gain a better understanding of the ways in which CO2 buildup could affect crop growth and to identify the different methods available for assessing the impacts of CO2-induced environmental changes on crop yields. The second task involved identifying the scope of both the direct and indirect economic impacts that could occur as a result of CO2-induced changes in crop yields. The third task then consisted of a comprehensive literature search to identify what types of economic models could be used effectively to assess the kinds of direct and indirect economic impacts that could conceivably occur as a result of CO2 buildup. Specific attention was focused upon national and multi-regional agricultural sector models, multi-country agricultural trade models, and macroeconomic models of the US economy. The fourth and final task of this research involved synthesizing the information gathered in the previous tasks into a systematic framework for assessing the direct and indirect economic impacts of CO2-induced environmental changes related to agricultural production.
Lin, YuPo J.; Hestekin, Jamie; Arora, Michelle; St. Martin, Edward J.
2004-09-28
An electrodeionization method for continuously producing and/or separating and/or concentrating ionizable organics present in dilute concentrations in an ionic solution while controlling the pH to within one to one-half pH unit.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E.; Sweeney, C.; Turner, A. J.
2015-11-18
Understanding methane emissions from the Arctic, a fast warming carbon reservoir, is important for projecting changes in the global methane cycle under future climate scenarios. Here we optimize Arctic methane emissions with a nested-grid high-resolution inverse model by assimilating both high-precision surface measurements and column-average SCIAMACHY satellite retrievals of methane mole fraction. For the first time, methane emissions from lakes are integrated into an atmospheric transport and inversion estimate, together with prior wetland emissions estimated by six different biogeochemical models. We find that the global methane emissions during July 2004 to June 2005 ranged from 496.4 to 511.5 Tg yr-1, with wetland methane emissions ranging from 130.0 to 203.3 Tg yr-1. The Arctic methane emissions during July 2004 to June 2005 were in the range of 14.6 to 30.4 Tg yr-1, with wetland and lake emissions ranging from 8.8 to 20.4 Tg yr-1 and from 5.4 to 7.9 Tg yr-1, respectively. Canadian and Siberian lakes contributed most of the estimated lake emissions. Due to insufficient measurements in the region, Arctic methane emissions are less constrained in northern Russia than in Alaska, northern Canada and Scandinavia. Comparison of different inversions indicates that the distribution of global and Arctic methane emissions is sensitive to prior wetland emissions. Evaluation with independent datasets shows that the global and Arctic inversions improve estimates of methane mixing ratios in the boundary layer and free troposphere. The high-resolution inversions provide more details about the spatial distribution of methane emissions in the Arctic.
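The linear Gaussian update underlying such flux inversions can be sketched in a few lines. Everything below is hypothetical for illustration: a two-region emission vector, a toy transport/observation operator, and diagonal covariances, whereas the real system is nested-grid and far higher dimensional:

```python
import numpy as np

def bayesian_update(x_a, S_a, y, H, S_o):
    """Posterior estimate for the linear Gaussian inverse problem y = H x + e:
    x_hat = x_a + G (y - H x_a), with gain G and posterior covariance S_hat."""
    G = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_o)
    x_hat = x_a + G @ (y - H @ x_a)
    S_hat = (np.eye(len(x_a)) - G @ H) @ S_a
    return x_hat, S_hat

x_true = np.array([10.0, 5.0])  # "true" regional emissions (arbitrary units)
x_a = np.array([8.0, 8.0])      # prior guess (cf. the role of wetland priors)
S_a = np.eye(2) * 4.0           # prior error covariance
H = np.array([[1.0, 0.2],
              [0.3, 1.0]])      # hypothetical transport/observation operator
S_o = np.eye(2) * 0.01          # observation error covariance
y = H @ x_true                  # noise-free synthetic observations
x_hat, S_hat = bayesian_update(x_a, S_a, y, H, S_o)
```

With accurate observations, the posterior pulls the prior toward the true fluxes and shrinks the error covariance, which is why regions with sparse measurements (e.g., northern Russia above) remain closer to their priors.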
An Approach to Enhance pnetCDF Performance in Environmental Modeling Applications
Wong, David; Yang, Cheng-En; Fu, Joshua S.; Wong, Kwai; Gao, Yang
2015-01-01
I/O has long been considered a bottleneck in parallel applications. The software package pnetCDF, which works with parallel file systems, was developed to address this issue and provide parallel I/O capability. This study examines the performance of a novel approach which performs data aggregation along either the row or the column dimension of the spatial domain, and then applies the pnetCDF parallel I/O paradigm. The test was done with three different domain sizes, which represent small, moderately large and large data domains, using a small-scale mock-up of the Community Multiscale Air Quality model (CMAQ) code. The examination compares I/O performance among the traditional serial I/O technique, the straight application of pnetCDF, and data aggregation along the row or column dimension before applying pnetCDF. From this comparison, optimal I/O configurations for the new approach were quantified. Data aggregation along the row dimension (pnetCDFcr) works better than along the column dimension (pnetCDFcc), although it may perform slightly worse than the straight pnetCDF method with a small number of processors. When the number of processors becomes larger, pnetCDFcr outperforms pnetCDF significantly. If the number of processors keeps increasing, pnetCDF reaches a point at which its performance is even worse than the serial I/O technique. The new approach has also been tested on a real application, where it performs two times better than the straight pnetCDF paradigm.
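The row-aggregation idea can be illustrated without MPI: ranks in the same processor row ship their blocks to one aggregator, so fewer writers each issue a contiguous slab to the parallel file system. The 2x2 decomposition and 6x8 domain below are assumptions for the sketch, and the actual pnetCDF collective write is replaced by an in-memory reassembly:

```python
import numpy as np

def aggregate_rows(blocks):
    """Gather each processor row's subdomain blocks onto one aggregator,
    so nrows writers each hold a contiguous slab of the output array."""
    return [np.hstack(row) for row in blocks]

domain = np.arange(48.0).reshape(6, 8)           # mock 6x8 spatial field
blocks = [[domain[0:3, 0:4], domain[0:3, 4:8]],  # 2x2 processor decomposition
          [domain[3:6, 0:4], domain[3:6, 4:8]]]

slabs = aggregate_rows(blocks)  # 2 writers instead of 4
reassembled = np.vstack(slabs)  # what the slab writes would produce in the file
```

Because each slab spans whole rows of the array, the write requests are contiguous in the file layout, which is the property the aggregation is meant to exploit.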
Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean -Hubert
2015-11-16
With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure–property relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. In conclusion, this manuscript ends with a discussion on some problems and open scientific questions that are being explored in order to advance this relatively new field of research.
Broader source: Energy.gov [DOE]
Presentation given by Mississippi State University at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about a systematic...
2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting
Jack Parker
2007-04-19
The task objectives are: (1) Gain an improved understanding of hydrologic, geochemical and biological processes and their interactions at relevant time and space scales; and (2) Develop practical, site-independent tools for evaluating effects of natural and engineered processes on long-term performance.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Based on the project's scope, the purpose of the estimate, and the availability of estimating resources, the estimator can choose one or a combination of techniques when estimating an activity or project. Estimating methods, estimating indirect and direct costs, and other estimating considerations are discussed in this chapter.
This document is a pre-publication Federal Register supplemental notice of proposed rulemaking regarding energy conservation standards for alternative efficiency determination methods, basic model definition, and compliance for commercial HVAC, Refrigeration, and Water Heating Equipment, as issued by the Deputy Assistant Secretary for Energy Efficiency on September 18, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.
Wang, Hailong; Rasch, Philip J.; Easter, Richard C.; Singh, Balwinder; Zhang, Rudong; Ma, Po-Lun; Qian, Yun; Ghan, Steven J.; Beagley, Nathaniel
2014-11-27
We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the fate of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent. Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions.
The relative contribution from major non-Arctic sources to the Arctic BC burden increases only slightly, although the contribution of Arctic local sources is reduced by a factor of 2 due to the slow aging treatment.
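The tagging technique rests on the linearity of tracer transport: carrying one tracer per source region under the same transport and removal operators partitions the total burden exactly, without perturbing the emissions. A box-model sketch, with hypothetical emission and removal rates standing in for the full atmospheric model:

```python
import numpy as np

def step(c, emissions, loss_rate, dt):
    """Advance a linear tracer one step: constant emission source minus
    first-order removal. Linearity is what makes source tagging exact."""
    return c + dt * (emissions - loss_rate * c)

dt, loss = 0.1, 0.02
e_regions = np.array([3.0, 1.5, 0.5])  # hypothetical per-region emission rates
tagged = np.zeros(3)                   # one tagged tracer per source region
total = 0.0                            # single untagged tracer for comparison
for _ in range(500):
    tagged = step(tagged, e_regions, loss, dt)
    total = step(total, e_regions.sum(), loss, dt)

shares = tagged / tagged.sum()         # source attribution of the burden
```

With a uniform removal rate the burden shares equal the emission shares; the region-dependent lifetimes reported above are exactly what breaks this proportionality in the real model.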
Novel method for carbon nanofilament growth on carbon fibers
Phillips, Jonathan; Luhrs, Claudia; Tehrani, Mehran; Al-Haik, Marwan; Garcia, Daniel; Taha, Mahmoud R
2009-01-01
Fiber reinforced structural composites such as fiber reinforced polymers (FRPs) have proven to be key materials for blast mitigation due to their enhanced mechanical performance. However, there is a need to further increase the total energy absorption of the composites in order to retain structural integrity in high energy environments, for example, blast events. Research has shown that composite failure in high energy environments can be traced to their relatively low shear strength, attributed to the limited bond strength between the matrix and the fibers. One area of focus for improving the strength of composite materials has been to create 'multi-scale' composites. The most common approach to date is to introduce carbon nanotubes into a more traditional composite consisting of epoxy with embedded micron-scale fibers. The inclusion of carbon nanotubes (CNTs) clearly toughens different matrices; depositing CNTs in a brittle matrix increases stiffness by orders of magnitude. Currently, this approach to creating multiscale composites is limited by the difficulty of dispersing significant amounts of nanotubes. It has repeatedly been reported that phase separation occurs above relatively low weight percent loading (ca. 3%) due to the strong van der Waals forces between CNTs compared with those between CNT and polymer. Hence, the nanotubes tend to segregate and form inclusions. One means to prevent nanotube or nanofilament agglomeration is to anchor one end of the nanostructure, thereby creating a stable multi-phase structure. This is most easily done by literally growing the CNTs directly on micron-scale fibers. Recently, CNTs were grown on carbon fibers, both polyacrylonitrile- (PAN-) and pitch-based, by hot filament chemical vapor deposition (HFCVD) using H2 and CH4 as precursors. Nickel clusters were electrodeposited on the fiber surfaces to catalyze the growth, and uniform CNT coatings were obtained on both the PAN- and pitch-based carbon fibers.
Multiwalled CNTs with smooth walls and low impurity content were grown. Carbon nanofibers were also grown on a carbon fiber cloth using plasma enhanced chemical vapor deposition (CVD) from a mixture of acetylene and ammonia. In this case, a cobalt colloid was used to achieve good coverage of nanofibers on carbon fibers in the cloth. Caveats to CNT growth include damage to the carbon fiber surface due to high temperatures (>800 °C). More recently, Qu et al. reported a new method for uniform deposition of CNTs on carbon fibers. However, this method requires processing at 1100 °C in the presence of oxygen, and such a high temperature is anticipated to deepen the damage to the carbon fibers. In the present work, multi-scale filaments (herein, linear carbon structures with multi-micron diameters are called 'fibers'; all structures with sub-micron diameters are called 'filaments') were created with a low temperature (ca. 550 °C) alternative to CVD growth of CNTs. Specifically, nano-scale filaments were rapidly generated (>10 microns/hour) on commercial micron-scale fibers via catalytic (Pd particle) growth in a fuel-rich combustion environment at atmospheric pressure. This atmospheric pressure process, derived from the process called Graphitic Structures by Design (GSD), is rapid, keeps the maximum temperature low enough (below 700 °C) to avoid structural damage, and is inexpensive and readily scalable. In some cases, a significant and unexpected aspect of the process was the generation of 'three scale' materials. That is, materials with these three size characteristics were produced: (1) micrometer-scale commercial PAN fibers, (2) a layer of 'long' sub-micrometer-diameter carbon filaments, and (3) a dense layer of 'short' nanometer-diameter filaments.
Hierarchical Models for Batteries: Overview with Some Case Studies
Pannala, Sreekanth; Mukherjee, Partha P; Allu, Srikanth; Nanda, Jagjit; Martha, Surendra K; Dudney, Nancy J; Turner, John A
2012-01-01
Batteries are complex multiscale systems, and a hierarchy of models has been employed to study different aspects of batteries at different resolutions. For the electrochemistry and charge transport, the models span electric-circuit, single-particle, pseudo-2D, detailed 3D, and microstructure-resolved descriptions at the continuum scales, with techniques such as molecular dynamics and density functional theory resolving the atomistic structure. Similar analogies exist for the thermal, mechanical, and electrical aspects of the batteries. We have recently been working on the development of a unified formulation for the continuum scales across the electrode-electrolyte-electrode system, using a rigorous volume-averaging approach typical of multiphase formulations. This formulation accounts for any spatio-temporal variation of the different properties, such as electrode/void volume fractions and anisotropic conductivities. In this talk the following will be presented: the background and the hierarchy of models that need to be integrated into a battery modeling framework to carry out predictive simulations; our recent work on the unified 3D formulation addressing the missing links in the multiscale description of the batteries; our work on microstructure-resolved simulations for diffusion processes; upscaling of quantities of interest to construct closures for the 3D continuum description; and sample results for a standard Carbon/Spinel cell, which will be compared to experimental data. Finally, the infrastructure we are building to bring together components with different physics operating at different resolutions will be presented. The presentation will also include details about how this generalized approach can be applied to other electrochemical storage systems such as supercapacitors, Li-Air batteries, and lithium batteries with 3D architectures.
Segre, Daniel
2015-12-09
The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
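The flux-balance core of such a simulation is a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. A single-organism, single-step sketch; the toy three-reaction network (uptake, conversion, biomass) and the uptake bound of 10 are assumptions for illustration, not COMETS defaults:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix over internal metabolites A and B for three reactions:
# v1: uptake -> A, v2: A -> B, v3: B -> biomass
S = np.array([[1.0, -1.0, 0.0],   # metabolite A: produced by v1, consumed by v2
              [0.0, 1.0, -1.0]])  # metabolite B: produced by v2, consumed by v3
c = np.array([0.0, 0.0, -1.0])    # linprog minimizes, so negate biomass flux v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # nutrient uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
growth = -res.fun                 # optimal biomass flux, limited by uptake
```

COMETS embeds this optimization in space and time: each lattice box solves an FBA problem per time step, with the uptake bounds set by the local, diffusing nutrient concentrations.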
Townsend, R.G.
1959-08-25
A method is described for protectively coating beryllium metal by etching the metal in an acid bath, immersing the etched beryllium in a solution of sodium zincate for a brief period of time, immersing the beryllium in concentrated nitric acid, immersing the beryllium in a second solution of sodium zincate, electroplating a thin layer of copper over the beryllium, and finally electroplating a layer of chromium over the copper layer.
Oberai, Assad A
2013-07-16
In the report we present a summary of the new models and algorithms developed by the PI and the students supported by this grant. These developments are described in detail in ten peer-reviewed journal articles that acknowledge support from this grant.
Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.
2009-09-30
Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. 
Some key objectives of this project in FY09 were to assess the current state-of-the-art in reservoir model development, the data types and analyses that need to be performed in order to develop and parameterize credible and robust reservoir simulation models, and to review existing software that is applicable to these analyses. This report describes this effort and highlights areas in which additional software development, wiki application extensions, or related GS3 infrastructure development may be warranted.
He, W.; Anderson, R.N.
1998-08-25
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.
He, Wei; Anderson, Roger N.
1998-01-01
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.
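At the heart of impedance inversion is the recursive relation between reflection coefficients and acoustic impedance. The patent's iterative scheme additionally constrains the inversion with well log data and a time/depth-dependent source function, which this sketch omits; the four-layer profile below is hypothetical:

```python
import numpy as np

def impedance_from_reflectivity(z0, r):
    """Recursive trace inversion: given the top-layer impedance z0 and
    reflection coefficients r[i] = (Z[i+1]-Z[i]) / (Z[i+1]+Z[i]),
    recover the profile via Z[i+1] = Z[i] * (1 + r[i]) / (1 - r[i])."""
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1.0 + ri) / (1.0 - ri))
    return np.array(z)

# round-trip check on a hypothetical 4-layer profile (units: (m/s)*(g/cc))
z_true = np.array([2000.0, 2400.0, 1800.0, 3000.0])
r = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
z_rec = impedance_from_reflectivity(z_true[0], r)
```

Low-impedance intervals recovered this way are the candidates for region growing into hydrocarbon-bearing zones, and differencing impedance volumes from time-separated surveys highlights the dynamic fluid effects the patent targets.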
Walls, Claudia A.; Kirby, Glen H.; Janney, Mark A.; Omatete, Ogbemi O.; Nunn, Stephen D.; McMillan, April D.
2000-01-01
A method of gelcasting includes the steps of providing a solution of at least hydroxymethylacrylamide (HMAM) and water. At least one inorganic powder is added to the mixture. At least one initiator system is provided to polymerize the HMAM. The initiator polymerizes the HMAM and water, to form a firm hydrogel that contains the inorganic powder. One or more comonomers can be polymerized with the HMAM monomer, to alter the final properties of the gelcast material. Additionally, one or more additives can be included in the polymerization mixture, to alter the properties of the gelcast material.
Peigney, B. E.; Larroche, O.
2014-12-15
In this article, we study the hydrodynamics and burn of the thermonuclear fuel in inertial confinement fusion pellets at the ion kinetic level. The analysis is based on a two-velocity-scale Vlasov-Fokker-Planck kinetic model that is specially tailored to treat fusion products (suprathermal α-particles) in a self-consistent manner with the thermal bulk. The model assumes spherical symmetry in configuration space and axial symmetry in velocity space around the mean flow velocity. A typical hot-spot ignition design is considered. Compared with fluid simulations where a multi-group diffusion scheme is applied to model α transport, the full ion-kinetic approach reveals significant non-local effects on the transport of energetic α-particles. This has a direct impact on hydrodynamic spatial profiles during combustion: the hot spot reactivity is reduced, while the inner dense fuel layers are pre-heated by the escaping suprathermal α-particles, which are transported farther out of the hot spot. We show how the kinetic transport enhancement of fusion products leads to a significant reduction of the fusion yield.
Ward, Donald K.; Zhou, Xiaowang; Karnesky, Richard A.; Kolasinski, Robert; Foster, Michael E.; Thurmer, Konrad; Chao, Paul; Epperly, Ethan Nicholas; Zimmerman, Jonathan A.; Wong, Bryan M.; Sills, Ryan B.
2015-09-01
Current austenitic stainless steel storage reservoirs for hydrogen isotopes (e.g., deuterium and tritium) have performance and operational life-limiting interactions (e.g., embrittlement) with H-isotopes. Aluminum alloys (e.g., AA2219), alternatively, have very low H-isotope solubilities, suggesting high resistance towards aging vulnerabilities. This report summarizes the work performed during the life of the Laboratory Directed Research and Development project in the Nuclear Weapons investment area (165724), and provides invaluable modeling and experimental insights into the interactions of H isotopes with surfaces and bulk Al-Cu alloys. The modeling work establishes and builds a multi-scale framework which includes: a density functional theory informed bond-order potential for classical molecular dynamics (MD), and subsequent use of MD simulations to inform defect-level dislocation dynamics models. Furthermore, low energy ion scattering and thermal desorption spectroscopy experiments are performed to validate these models and add greater physical understanding to them.
The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1
Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Burkhart, J.; Wotawa, G.
2013-11-01
The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.
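FLEXPART's actual transport scheme integrates a Langevin equation with autocorrelated turbulent velocities; as a hedged illustration of the underlying idea only, the zeroth-order (diffusion-limit) random walk below moves tracer particles with a uniform mean wind plus Gaussian vertical displacements. All parameter values are hypothetical, not FLEXPART defaults.

```python
import numpy as np

rng = np.random.default_rng(42)

def disperse(n_particles=20000, n_steps=100, dt=60.0, u=5.0, sigma_w=0.5):
    """Release particles from a point source aloft and advect them with a
    uniform mean wind u (m/s) in x while adding Gaussian vertical
    displacements with turbulent velocity scale sigma_w (m/s)."""
    x = np.zeros(n_particles)
    z = np.zeros(n_particles)
    for _ in range(n_steps):
        x += u * dt
        z += sigma_w * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x, z

x, z = disperse()
# Vertical plume spread grows as sigma_w * sqrt(t) in this diffusion limit.
```

A first-order scheme (autocorrelated turbulent velocity, the approach FLEXPART takes) differs mainly in carrying a velocity state per particle with a Lagrangian decorrelation time.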
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Application of the Projection-Based Embedding Method. Taylor Barnes, NERSC Annual Meeting, Feb. 24, 2015. Outline: investigation of the oxidative decomposition of lithium-ion battery solvents (application); development of the accurate projection-based WFT-in-DFT embedding method.
Grover, Blair K.; Hubbell, Joel M.; Sisson, James B.; Casper, William L.
2005-12-20
A method for collecting data regarding a matric potential of a media includes providing a tensiometer having a stainless steel tensiometer casing, the stainless steel tensiometer casing comprising a tip portion which includes a wetted porous stainless steel membrane through which a matric potential of a media is sensed; driving the tensiometer into the media using an insertion tube comprising a plurality of probe casings which are selectively coupled to form the insertion tube as the tensiometer is progressively driven deeper into the media, wherein the wetted porous stainless steel membrane is in contact with the media; and sensing the matric potential the media exerts on the wetted porous stainless steel membrane by a pressure sensor in fluid hydraulic connection with the porous stainless steel membrane. A tensiometer includes a stainless steel casing.
Cornell, A.A.; Dunbar, J.V.; Ruffner, J.H.
1959-09-29
A semi-automatic method is described for the weld joining of pipes and fittings which utilizes the inert gas-shielded consumable electrode electric arc welding technique, comprising laying down the root pass at a first peripheral velocity and thereafter laying down the filler passes over the root pass necessary to complete the weld by revolving the pipes and fittings at a second peripheral velocity different from the first peripheral velocity, maintaining the welding head in a fixed position as to the specific direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.
Marsden, Kenneth C.; Meyer, Mitchell K.; Grover, Blair K.; Fielding, Randall S.; Wolfensberger, Billy W.
2012-12-18
A casting device includes a covered crucible having a top opening and a bottom orifice, a lid covering the top opening, a stopper rod sealing the bottom orifice, and a reusable mold having at least one chamber, a top end of the chamber being open to and positioned below the bottom orifice and a vacuum tap into the chamber being below the top end of the chamber. A casting method includes charging a crucible with a solid material and covering the crucible, heating the crucible, melting the material, evacuating a chamber of a mold to less than 1 atm absolute through a vacuum tap into the chamber, draining the melted material into the evacuated chamber, solidifying the material in the chamber, and removing the solidified material from the chamber without damaging the chamber.
Verdant Current Modeling Methods And Validation
G. R. Odette; G. E. Lucas
2005-11-15
This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.
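The Master Curve relation referenced in item 3a has a standardized shape (ASTM E1921): the median fracture toughness follows K_Jc(med) = 30 + 70·exp[0.019 (T − T0)] MPa·m^0.5, so a single reference temperature T0 indexes the whole curve. A one-function sketch of that standard shape (not the report's micromechanical model):

```python
import math

def kjc_median(T, T0):
    """Median fracture toughness (MPa*m^0.5) from the ASTM E1921 Master
    Curve shape, indexed entirely by the reference temperature T0 (deg C)."""
    return 30.0 + 70.0 * math.exp(0.019 * (T - T0))

print(kjc_median(-50.0, -50.0))   # → 100.0, since K_Jc(med) = 100 at T = T0
```

An embrittlement shift ΔTo then simply translates the curve: kjc_median(T, T0 + dT0).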
Schatzinger, R.A.; Tomutsa, L.
1997-08-01
In order to accurately predict fluid flow within a reservoir, variability in the rock properties at all scales relevant to the specific depositional environment needs to be taken into account. The present work describes rock variability at scales from hundreds of meters (facies level) to millimeters (laminae) based on outcrop studies of the Almond Formation. Tidal channel, tidal delta and foreshore facies were sampled on the eastern flank of the Rock Springs uplift, southeast of Rock Springs, Wyoming. The Almond Fm. was deposited as part of a mesotidal Upper Cretaceous transgressive systems tract within the greater Green River Basin. Bedding style, lithology, lateral extent of beds or bedsets, bed thickness, amount and distribution of depositional clay matrix, bioturbation and grain sorting provide controls on sandstone properties that may vary more than an order of magnitude within and between depositional facies in outcrops of the Almond Formation. These features can be mapped on the scale of an outcrop. The products of diagenesis such as the relative timing of carbonate cement, scale of cemented zones, continuity of cemented zones, selectively leached framework grains, lateral variability of compaction of sedimentary rock fragments, and the resultant pore structure play an equally important, although less predictable role in determining rock property heterogeneity. A knowledge of the spatial distribution of the products of diagenesis such as calcite cement or compaction is critical to modeling variation even within a single facies in the Almond Fm. because diagenesis can enhance or reduce primary (depositional) rock property heterogeneity. Application of outcrop heterogeneity models to the subsurface is greatly hindered by differences in diagenesis between the two settings. The measurements upon which this study is based were performed both on drilled outcrop plugs and on blocks.
Mukul M. Sharma; Steven L. Bryant; Carlos Torres-Verdin; George Hirasaki
2007-09-30
The petrophysical properties of rocks, particularly their relative permeability and wettability, strongly influence the efficiency and the time-scale of all hydrocarbon recovery processes. However, the quantitative relationships needed to account for the influence of wettability and pore structure on multi-phase flow are not yet available, largely due to the complexity of the phenomena controlling wettability and the difficulty of characterizing rock properties at the relevant length scales. This project brings together several advanced technologies to characterize pore structure and wettability. Grain-scale models are developed that help to better interpret the electric and dielectric response of rocks. These studies allow the computation of realistic configurations of two immiscible fluids as a function of wettability and geologic characteristics. These fluid configurations form a basis for predicting and explaining macroscopic behavior, including the relationship between relative permeability, wettability and laboratory and wireline log measurements of NMR and dielectric response. Dielectric and NMR measurements have been made that show that the response of the rocks depends on the wetting and flow properties of the rock. The theoretical models can be used for a better interpretation and inversion of standard well logs to obtain accurate and reliable estimates of fluid saturation and of their producibility. The ultimate benefit of this combined theoretical/empirical approach for reservoir characterization is that rather than reproducing the behavior of any particular sample or set of samples, it can explain and predict trends in behavior that can be applied at a range of length scales, including correlation with wireline logs, seismic, and geologic units and strata.
This approach can substantially enhance wireline log interpretation for reservoir characterization and provide better descriptions, at several scales, of crucial reservoir flow properties that govern oil recovery.
Method for determining gene knockouts
Maranas, Costas D.; Burgard, Anthony R.; Pharkya, Priti
2013-06-04
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, the model includes a plurality of metabolic reactions defining metabolite relationships, the method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
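As a rough sketch of the idea (not the patent's formulation, which couples the two objectives in a single bilevel optimization program rather than enumerating), one can list candidate deletions on a toy stoichiometric network, solve the inner cellular-objective LP for each, and score the knockout by the product flux it guarantees at the cellular optimum. The five-reaction network below is entirely hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: metabolites A, B, P (rows) x reactions
# [uptake, pathA, pathB, biomass, product_export] (columns).
# pathB yields the product P as a growth-coupled byproduct; pathA does not.
S = np.array([
    [1.0, -1.0, -1.0,  0.0,  0.0],   # A: consumed by both pathways
    [0.0,  1.0,  1.0, -1.0,  0.0],   # B: biomass precursor
    [0.0,  0.0,  1.0,  0.0, -1.0],   # P: product, exported
])
BIO, PROD = 3, 4
base_bounds = [(0.0, 10.0)] * 5

def worst_case_production(knockouts=()):
    """Inner problem: maximize the cellular objective (biomass flux),
    then report the minimum product flux guaranteed at that optimum."""
    bounds = [(0.0, 0.0) if j in knockouts else b
              for j, b in enumerate(base_bounds)]
    c = np.zeros(5); c[BIO] = -1.0          # linprog minimizes
    r1 = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    growth = -r1.fun
    # Pin growth at its optimum and minimize the product flux.
    A_eq = np.vstack([S, np.eye(5)[BIO]])
    b_eq = np.append(np.zeros(3), growth)
    c2 = np.zeros(5); c2[PROD] = 1.0
    r2 = linprog(c2, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return growth, r2.fun

# Enumerate single deletions of the two internal pathways.
best = max([(1,), (2,)], key=lambda ko: worst_case_production(ko)[1])
# Deleting pathA forces all flux through the product-coupled pathB.
```

Enumeration scales exponentially in the knockout-set size, which is precisely why the patented approach solves one coupled optimization problem instead.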
Method for determining gene knockouts
Maranas, Costas D.; Burgard, Anthony R.; Pharkya, Priti
2011-09-27
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, the model includes a plurality of metabolic reactions defining metabolite relationships, the method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
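The patented method places grid cells so that discretization errors of the governing relations vanish; the sketch below instead illustrates the related, standard equidistribution idea of solution-adaptive gridding: concentrate nodes where a curvature-based error monitor is large. The monitor function and test profile are illustrative assumptions, not the patent's construction.

```python
import numpy as np

def equidistribute(x_uniform, u, n_nodes):
    """Place n_nodes so that each cell carries an equal share of the
    arc-length/curvature error monitor w = sqrt(1 + u''(x)^2)."""
    d2u = np.gradient(np.gradient(u, x_uniform), x_uniform)
    w = np.sqrt(1.0 + d2u ** 2)
    # Cumulative integral of the monitor (trapezoidal rule).
    W = np.concatenate([[0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_uniform))])
    targets = np.linspace(0.0, W[-1], n_nodes)
    return np.interp(targets, W, x_uniform)

x = np.linspace(0.0, 1.0, 2001)
u = np.tanh(20.0 * (x - 0.5))       # solution with a sharp layer at x = 0.5
xg = equidistribute(x, u, 41)
# Nodes cluster near x = 0.5, where the truncation-error indicator peaks.
```

Equidistribution only balances an error estimate across cells; the patent's distinguishing claim is driving specific discretization-error terms of the governing relations to zero.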
MO-G-BRF-07: Anomalously Fast Diffusion of Carbon Nanotube Carriers in 3D Tissue Model
Wang, Y; Bahng, J; Kotov, N
2014-06-15
Purpose: We aim to investigate and understand the diffusion of carbon nanotubes (CNTs) and other nanoscale particles in tissues and organs. Methods: We utilized a 3D model tissue of hepatocellular carcinoma (HCC) cultured in inverted colloidal crystal (ICC) scaffolds to compare the diffusivity of CNTs with that of small molecules such as Rhodamine and FITC in vitro, and further investigated the transport of CNTs with and without the targeting ligand TGFβ1. Real-time permeation profiles of CNTs in the HCC tissue model were acquired with high temporal and spatial resolution using standard confocal microscopy. Quantitative analysis of the diffusion process in 3D was carried out using luminescence intensity in a series of Z-stack images obtained at different time points after initial addition of CNTs or small molecules to the cell culture; the image data were analyzed with ImageJ and Mathematica. Results: CNTs diffuse through model tissues substantially faster than small molecules of similar charge such as FITC, and the diffusion rate of CNTs is significantly enhanced by the targeting ligand TGFβ1. Conclusion: The in vitro model gave us access to measurements of the rate of CNT penetration under designed conditions with variable parameters. The findings obtained with this model changed our understanding of the advantages of CNTs as nanoscale drug carriers and provide design principles for new drug carriers for both treatment and diagnostics. Additionally, the fast diffusion opens the discussion of the best possible drug carriers to reach deep parts of cancerous tissues, which is often a prerequisite for successful cancer treatment. This work was supported by the Center for Photonic and Multiscale Nanomaterials funded by National Science Foundation Materials Research Science and Engineering Center program DMR 1120923.
The work was also partially supported by NSF grant ECS-0601345; EFRI-BSBA 0938019; CBET 0933384; CBET 0932823; CBET 1036672, AFOSR MURI 444286-P061716 and NIH 1R21CA121841-01A2.
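One way the Z-stack intensity analysis described above can be quantified (a hedged sketch with synthetic data, not the authors' actual pipeline) is to track the half-intensity penetration depth over time and fit an effective diffusivity from its √t growth, using the constant-source solution c/c0 = erfc(x / (2√(Dt))). The diffusivity, depth range, and times below are hypothetical.

```python
import numpy as np
from scipy.special import erfc, erfcinv

# Synthetic constant-source penetration profiles standing in for
# normalized Z-stack luminescence intensity at several times.
D_true = 5.0e-9                        # effective diffusivity (cm^2/s), assumed
x = np.linspace(0.0, 0.05, 500)        # depth into the scaffold (cm)
times = np.array([60.0, 300.0, 900.0, 1800.0])   # s after adding the carrier

profiles = [erfc(x / (2.0 * np.sqrt(D_true * t))) for t in times]

# Depth at which intensity falls to half its surface value, per time point.
x_half = np.array([np.interp(0.5, p[::-1], x[::-1]) for p in profiles])

# Since x_half = 2 erfcinv(0.5) sqrt(D t), the slope of x_half
# versus sqrt(t) recovers the effective diffusivity.
slope = np.polyfit(np.sqrt(times), x_half, 1)[0]
D_fit = (slope / (2.0 * erfcinv(0.5))) ** 2
```

Comparing the fitted D for CNTs against FITC-like small molecules would express the reported "anomalously fast" result as a ratio of effective diffusivities.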
Methods Data Qualification Interim Report
R. Sam Alessi; Tami Grimmett; Leng Vang; Dave McGrath
2010-09-01
The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representation of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display to the NDMAS web portal. This interim report describes the current development of NDMAS for Methods data and discusses the data, and its qualification, that are currently part of NDMAS.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.
2015-06-11
We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.
Day-Lewis, Frederick; Singha, Kamini; Haggerty, Roy; Johnson, Timothy; Binley, Andrew; Lane, John
2014-03-10
In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Our study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. To achieve our objectives, we implemented a 3-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE’s Hanford 300 Area.
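The hysteresis behind this geoelectrical signature can be reproduced with a minimal first-order mobile-immobile exchange model (a sketch with hypothetical parameters, not the project's codes): the immobile-domain concentration lags the mobile one, so bulk conductivity differs between the rising and falling limbs of a tracer pulse at the same mobile concentration.

```python
import numpy as np

# Single-point dual-domain (mobile-immobile) mass transfer:
#   d(c_im)/dt = alpha * (c_m - c_im)
# All parameter values below are hypothetical.
alpha, dt = 0.05, 0.1                       # exchange rate (1/h), step (h)
t = np.arange(0.0, 200.0, dt)
c_m = np.exp(-(((t - 60.0) / 20.0) ** 2))   # passing tracer pulse, mobile domain

c_im = np.zeros_like(t)                     # immobile domain lags behind
for i in range(1, t.size):
    c_im[i] = c_im[i - 1] + dt * alpha * (c_m[i - 1] - c_im[i - 1])

theta_m, theta_im = 0.25, 0.10              # mobile / immobile porosities
sigma_bulk = theta_m * c_m + theta_im * c_im   # simplified bulk conductivity

# Plotting sigma_bulk against c_m traces a loop: at equal mobile
# concentration, bulk conductivity is higher on the falling limb.
```

Fitting alpha and the porosity ratio to observed loops is, in essence, the parameter-estimation problem the project's time-lapse electrical codes address.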
Crandall, Dustin; Soeder, Daniel J; McDannell, Kalin T.; Mroz, Thomas
2010-01-01
Historic data from the Department of Energy Eastern Gas Shale Project (EGSP) were compiled to develop a database of geochemical analyses, well logs, lithological and natural fracture descriptions from oriented core, and reservoir parameters. The nine EGSP wells were located throughout the Appalachian Basin and intercepted the Marcellus Shale from depths of 750 meters (2500 ft) to 2500 meters (8200 ft). A primary goal of this research is to use these existing data to help construct a geologic framework model of the Marcellus Shale across the basin and link rock properties to gas productivity. In addition to the historic data, x-ray computerized tomography (CT) of entire cores with a voxel resolution of 240 µm and optical microscopy to quantify mineral and organic volumes were performed. Porosity and permeability measurements in a high resolution, steady-state flow apparatus are also planned. Earth Vision software was utilized to display and perform volumetric calculations on individual wells, small areas with several horizontal wells, and on a regional basis. The results indicate that the lithologic character of the Marcellus Shale changes across the basin. Gas productivity appears to be influenced by the properties of the organic material and the mineral composition of the rock, local and regional structural features, the current state of in-situ stress, and lithologic controls on the geometry of induced fractures during stimulations. The recoverable gas volume from the Marcellus Shale is variable over the vertical stratigraphic section, as well as laterally across the basin. The results from this study are expected to help improve the assessment of the resource, and help optimize the recovery of natural gas.
Cats, K. H.; Andrews, J. C.; Stephan, O.; March, K.; Karunakaran, C.; Meirer, F.; de Groot, F. M. F.; Weckhuysen, B. M.
2016-02-16
In this study, the Fischer-Tropsch synthesis (FTS) reaction is one of the most promising processes to convert alternative energy sources, such as natural gas, coal or biomass, into liquid fuels and other high-value products. Despite its commercial implementation, we still lack fundamental insights into the various deactivation processes taking place during FTS. In this work, a combination of three methods for studying single catalyst particles at different length scales has been developed and applied to study the deactivation of Co/TiO2 Fischer-Tropsch synthesis (FTS) catalysts. By combining transmission X-ray microscopy (TXM), scanning transmission X-ray microscopy (STXM) and scanning transmission electron microscopy-electron energy loss spectroscopy (STEM-EELS) we visualized changes in the structure, aggregate size and distribution of supported Co nanoparticles that occur during FTS. At the microscale, Co nanoparticle aggregates are transported over several μm leading to a more homogeneous Co distribution, while at the nanoscale Co forms a thin layer of ~1-2 nm around the TiO2 support. The formation of the Co layer is the opposite case to the “classical” strong metal-support interaction (SMSI) in which TiO2 surrounds the Co, and is possibly related to the surface oxidation of Co metal nanoparticles in combination with coke formation. In other words, the observed migration and formation of a thin CoOx layer are similar to a previously discussed reaction-induced spreading of metal oxides across a TiO2 surface.
Veser, Goetz
2009-08-31
Nanomaterials have gained much attention as catalysts since the discovery of exceptional CO oxidation activity of nanoscale gold by Haruta. However, many studies avoid testing nanomaterials at the high temperatures relevant to reactions of interest for the production of clean energy (T > 700 °C). The generally poor thermal stability of catalytically active noble metals has thus far prevented significant progress in this area. We have recently overcome the poor thermal stability of nanoparticles by synthesizing a platinum barium-hexaaluminate (Pt-BHA) nanocomposite which combines the high activity of noble metal nanoparticles with the thermal stability of hexaaluminates. This Pt-BHA nanocomposite demonstrates excellent activity, selectivity, and long-term stability in CPOM. Pt-BHA is anchored onto a variety of support structures in order to improve the accessibility, safety, and reactivity of the nanocatalyst. Silica felts prove to be particularly amenable to this supporting procedure, with the resulting supported nanocatalyst proving to be as active and stable for CPOM as its unsupported counterpart. Various pre-treatment conditions are evaluated to determine their effectiveness in removing residual surfactant from the active nanoscale platinum particles. The size of these particles is measured across a wide temperature range, and the resulting plateau of stability from 600-900 °C can be linked to a particle caging effect due to the structure of the supporting ceramic framework. The nanocomposites are used to catalyze the combustion of a dilute methane stream, and the results indicate enhanced activity for both Pt-BHA as well as ceria-doped BHA, as well as an absence of internal mass transfer limitations at the conditions tested. In the water-gas shift (WGS) reaction, nanocomposite Pt-BHA shows stability during prolonged operation and no signs of deactivation during start-up/shut-down of the reactor.
The chemical and thermal stability, low molecular weight, and wealth of literature on the formation of mesoporous silica materials motivated investigations of nanocomposite silica catalysts. High surface area silicas are synthesized via sol-gel methods, and the addition of metal-salts lead to the formation of stable nanocomposite Ni- and Fe- silicates. The results of these investigations have increased the fundamental understanding and improved the applicability of nanocatalysts for clean energy applications.
Evaluation of DUSTRAN Software System for Modeling Chloride Deposition on Steel Canisters
Tran, Tracy T.; Jensen, Philip J.; Fritz, Brad G.; Rutz, Frederick C.; Devanathan, Ram
2015-07-29
The degradation of steel by stress corrosion cracking (SCC) when exposed to atmospheric conditions for decades is a significant challenge in the fossil fuel and nuclear industries. SCC can occur when corrosive contaminants such as chlorides are deposited on a susceptible material in a tensile stress state. The Nuclear Regulatory Commission has identified chloride-induced SCC as a potential cause for concern in stainless steel used nuclear fuel (UNF) canisters in dry storage. The modeling of contaminant deposition is the first step in predictive multiscale modeling of SCC that is essential to develop mitigation strategies, prioritize inspection, and ensure the integrity and performance of canisters, pipelines, and structural materials. A multiscale simulation approach can be developed to determine the likelihood that a canister would undergo SCC in a certain period of time. This study investigates the potential of DUSTRAN, a dust dispersion modeling system developed by Pacific Northwest National Laboratory, to model the deposition of chloride contaminants from sea salt aerosols on a steel canister. Results from DUSTRAN simulations run with historical meteorological data were compared against measured chloride data at a coastal site in Maine. DUSTRAN's CALPUFF model tended to simulate concentrations higher than those measured; however, the closest estimates were within the same order of magnitude as the measured values. The decrease in discrepancies between measured and simulated values as the level of abstraction in wind speed decreased suggests that the model is very sensitive to wind speed. However, the influence of other parameters, such as the distinction between open-ocean and surf-zone sources, needs to be explored further. Deposition values predicted by the DUSTRAN system were not in agreement with concentration values, suggesting that the deposition calculations may not fully represent the physical processes.
Overall, results indicate that, with parameter refinement, DUSTRAN has the potential to simulate atmospheric chloride dispersion and deposition on steel canisters.
William J. Gutowski; Joseph M. Prusa; Piotr K. Smolarkiewicz
2012-04-09
This project had the goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the 'physics' of the NCAR Community Atmospheric Model (CAM). The effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.
Stephen A. Holditch; Emrys Jones
2002-09-01
In 2000, Chevron began a project to learn how to characterize the natural gas hydrate deposits in the deepwater portions of the Gulf of Mexico. A Joint Industry Participation (JIP) group was formed in 2001, and a project partially funded by the U.S. Department of Energy (DOE) began in October 2001. The primary objective of this project is to develop technology and data to assist in the characterization of naturally occurring gas hydrates in the deepwater Gulf of Mexico. These naturally occurring gas hydrates can cause problems relating to drilling and production of oil and gas, as well as building and operating pipelines. Other objectives of this project are to better understand how natural gas hydrates can affect seafloor stability, to gather data that can be used to study climate change, and to determine how the results of this project can be used to assess if and how gas hydrates act as a trapping mechanism for shallow oil or gas reservoirs. As part of the project, three workshops were held. The first was a data collection workshop, held in Houston during March 14-15, 2002. The purpose of this workshop was to find out what data exist on gas hydrates and to begin making that data available to the JIP. The second and third workshops, on Geoscience and Reservoir Modeling and on Drilling and Coring Methods, respectively, were held simultaneously in Houston during May 9-10, 2002. The Modeling Workshop was conducted to find out what data the various engineers, scientists and geoscientists want the JIP to collect in both the field and the laboratory. The Drilling and Coring workshop was to begin making plans on how we can collect the data required by the project's principal investigators.
Novel Methods for Harvesting Solar Energy
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Novel Methods for Harvesting Solar Energy Novel Methods for Harvesting Solar Energy GrossmanFulv.png Model of a molecule that reversibly changes its structure when it absorbs light....
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must be along mesh boundaries.
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel; Wirth, Brian; Kennedy, Rory
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
Institute for Multiscale Materials Studies
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
science and mechanics of soft, responsive, engineered materials. Activities combine theory, experiment, and numerical simulation of phenomena in soft materials spanning 7-14...
Using Advanced Modeling to Accelerate the Scale-Up of Carbon Capture Technologies
Miller, David; Sun, Xin; Storlie, Curtis; Bhattacharyya, Debangsu
2015-06-18
Carbon capture and storage (CCS) is one of many approaches that are critical for significantly reducing domestic and global CO2 emissions. The U.S. Department of Energy’s Clean Coal Technology Program Plan envisions 2nd generation CO2 capture technologies ready for demonstration-scale testing around 2020 with the goal of enabling commercial deployment by 2025 [1]. Third generation technologies have a similarly aggressive timeline. A major challenge is that the development and scale-up of new technologies in the energy sector historically takes up to 15 years to move from the laboratory to pre-deployment and another 20 to 30 years for widespread industrial scale deployment. In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale up new carbon capture technologies. The CCSI Toolset (1) enables promising concepts to be more quickly identified through rapid computational screening of processes and devices, (2) reduces the time to design and troubleshoot new devices and processes by using optimization techniques to focus development on the best overall process conditions and by using detailed device-scale models to better understand and improve the internal behavior of complex equipment, and (3) provides quantitative predictions of device and process performance during scale up based on rigorously validated smaller scale simulations that take into account model and parameter uncertainty[2]. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.
A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre
2008-08-10
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data and thus saves memory and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires a substantial increase in the total number of points of the phase-space grid as the filaments become finer over time. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments.
Moreover, the main way to improve the efficiency of the adaptive method is to increase the locality of the numerical scheme in phase space, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with a more spatially local numerical scheme, such as compact finite difference schemes, the discontinuous Galerkin method, or finite element residual schemes, which are well suited to parallel domain decomposition techniques.
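The refinement criterion described in the abstract can be illustrated with a toy one-level Haar multiresolution analysis: cells whose detail coefficients exceed a tolerance are flagged, which yields a sparse representation of a function with localized structure. This is an illustrative sketch under assumed names and data, not the paper's semi-Lagrangian solver.

```python
# Toy Haar MRA: flag only the cells whose detail coefficients are large.

def haar_decompose(values):
    """One Haar level: (approximations, details) from a length-2n list."""
    approx = [(values[2*i] + values[2*i + 1]) / 2.0 for i in range(len(values) // 2)]
    detail = [(values[2*i] - values[2*i + 1]) / 2.0 for i in range(len(values) // 2)]
    return approx, detail

def sparse_representation(values, tol):
    """Keep only detail coefficients above tol; return (approx, kept_details)."""
    approx, detail = haar_decompose(values)
    kept = {i: d for i, d in enumerate(detail) if abs(d) > tol}
    return approx, kept

# A step-like signal produces a nonzero detail only in the pair containing
# the jump, so only that cell would be refined.
signal = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
approx, kept = sparse_representation(signal, tol=0.1)
```

On this signal only one of the four coarse cells carries a significant detail coefficient, which is exactly the sparsity the adaptive method exploits.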
Scott R. Reeves
2007-09-30
The primary goal of this project was to demonstrate a new and novel approach for high resolution, 3D reservoir characterization that can enable better management of CO2 enhanced oil recovery (EOR) projects and, looking to the future, carbon sequestration projects. The approach adopted has been the subject of previous research by the DOE and others, and relies primarily upon data-mining and advanced pattern recognition approaches. This approach honors all reservoir characterization data collected, but accepts that our understanding of how these measurements relate to the information of most interest, such as how porosity and permeability vary over a reservoir volume, is imperfect. Ideally the data needed for such an approach includes surface seismic to provide the greatest amount of data over the entire reservoir volume of interest, crosswell seismic to fill the resolution gap between surface seismic and wellbore-scale measurements, geophysical well logs to provide the vertical resolution sought, and core data to provide the tie to the information of most interest. These data are combined via a series of one or more relational models to enable, in its most successful application, the prediction of porosity and permeability on a vertical resolution similar to logs at each surface seismic trace location. In this project, the procedure was applied to the giant (and highly complex) SACROC unit of the Permian basin in West Texas, one of the world's largest CO2-EOR projects and a potentially world-class geologic sequestration site. Due to operational scheduling considerations on the part of the operator of the field, the crosswell data was not obtained during the period of project performance (it is, however, currently being collected as part of another DOE project). This compromised the utility of the surface seismic data for the project due to the resolution gap between it and the geophysical well logs.
An alternative approach was adopted that utilized a relational model to predict porosity and permeability profiles from well logs at each well location, and a 3D geostatistical variogram to generate the reservoir characterization over the reservoir volume of interest. A reservoir simulation model was built based upon this characterization and history-matched without making significant changes to it, thus validating the procedure. While not the same procedure as originally planned, the procedure ultimately employed proved successful and demonstrated that the general concepts proposed (i.e., data mining and advanced pattern recognition methods) have the flexibility to achieve the reservoir characterization objectives sought even with imperfect or incomplete data.
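The geostatistical step described above rests on an empirical semivariogram. The sketch below, using hypothetical 1-D well locations and porosity values, shows how such a variogram is typically estimated by binning half squared differences by separation distance; it illustrates the general technique, not the project's actual workflow.

```python
# Empirical semivariogram: gamma(h) = 0.5 * mean((z_i - z_j)^2) over
# all pairs whose separation falls in each lag bin.

def empirical_semivariogram(locations, values, bin_edges):
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    for i in range(len(locations)):
        for j in range(i + 1, len(locations)):
            h = abs(locations[i] - locations[j])  # 1-D separation for simplicity
            for b in range(len(bin_edges) - 1):
                if bin_edges[b] <= h < bin_edges[b + 1]:
                    sums[b] += 0.5 * (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Made-up porosity (fraction) at five wells along a 1-D transect.
wells = [0.0, 1.0, 2.0, 3.0, 4.0]
porosity = [0.10, 0.12, 0.15, 0.14, 0.18]
gamma = empirical_semivariogram(wells, porosity, bin_edges=[0.5, 1.5, 2.5])
```

A variogram model fitted to such binned estimates is what drives kriging or simulation of porosity between the wells.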
A. Alsaed
2004-09-14
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential of various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range-of-applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).
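As a rough illustration of how a lower-bound tolerance limit might be computed from benchmark results, the sketch below applies the standard one-sided normal tolerance-limit formula (sample mean minus a tolerance factor times the sample standard deviation) to made-up k_eff values. The tolerance factor is an assumed tabulated 95%/95% value for ten samples; none of the numbers come from the report.

```python
# One-sided lower tolerance limit from calculated benchmark k_eff values.
import math

def lbtl(keff_values, tolerance_factor):
    """mean - k * s, with s the sample standard deviation (n - 1 divisor)."""
    n = len(keff_values)
    mean = sum(keff_values) / n
    var = sum((k - mean) ** 2 for k in keff_values) / (n - 1)
    return mean - tolerance_factor * math.sqrt(var)

# Made-up benchmark results near criticality (k_eff close to 1).
keff = [0.998, 1.001, 0.995, 1.003, 0.999, 1.000, 0.997, 1.002, 0.996, 1.001]
limit = lbtl(keff, tolerance_factor=2.911)  # assumed 95/95 factor for n = 10
```

Configurations whose calculated k_eff falls below such a limit (minus any administrative margin) would then be judged subcritical with the stated confidence, provided they lie within the range of applicability of the benchmarks.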
Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hayes, P. L.; Carlton, A. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S.; Rappenglück, B.; Gilman, J. B.; Kuster, W. C.; de Gouw, J. A.; et al
2014-12-20
Four different parameterizations for the formation and evolution of secondary organic aerosol (SOA) are evaluated using a 0-D box model representing the Los Angeles Metropolitan Region during the CalNex 2010 field campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model-measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model/measurement agreement for mass concentration. When comparing the three parameterizations, the Grieshop et al. (2009) parameterization more accurately reproduces both the SOA mass concentration and oxygen-to-carbon ratio inside the urban area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena.
All the parameterizations over-predict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and global modeling. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Polycyclic aromatic hydrocarbons (PAHs) are less important precursors and contribute less than 4% of the SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27, 35–61, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3) %. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m⁻³ is also present due to the long distance transport of highly aged OA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr⁻¹ of SOA globally, or 17% of global SOA, 1/3 of which is likely to be non-fossil.
A DISLOCATION-BASED CLEAVAGE INITIATION MODEL FOR PRESSURE VESSEL
Cochran, Kristine B; Erickson, Marjorie A; Williams, Paul T; Klasky, Hilda B; Bass, Bennett Richard
2012-01-01
Efforts are under way to develop a theoretical, multi-scale model for the prediction of fracture toughness of ferritic steels in the ductile-to-brittle transition temperature (DBTT) region that accounts for temperature, irradiation, strain rate, and material condition (chemistry and heat treatment) effects. This new model is intended to address difficulties associated with existing empirically-derived models of the DBTT region that cannot be extrapolated to conditions for which data are unavailable. Dislocation distribution equations, derived from the theories of Yokobori et al., are incorporated to account for the local stress state prior to and following initiation of a microcrack from a second-phase particle. The new model is the basis for the DISlocation-based FRACture (DISFRAC) computer code being developed at the Oak Ridge National Laboratory (ORNL). The purpose of this code is to permit fracture safety assessments of ferritic structures with only tensile properties required as input. The primary motivation for the code is to assist in the prediction of radiation effects on nuclear reactor pressure vessels, in parallel with the EURATOM PERFORM 60 project.
Dynamic mesoscale model of dipolar fluids via fluctuating hydrodynamics
Persson, Rasmus A. X.; Chu, Jhih-Wei, E-mail: jwchu@nctu.edu.tw [Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Hsinchu 30068, Taiwan (China); Department of Biological Science and Technology, National Chiao Tung University, Hsinchu 30068, Taiwan (China); Voulgarakis, Nikolaos K. [Department of Mathematics, Washington State University, Richland, Washington 99372 (United States)
2014-11-07
Fluctuating hydrodynamics (FHD) is a general framework for mesoscopic modeling and simulation based on conservation laws and constitutive equations of linear and nonlinear responses. However, explicit representation of electrical forces in FHD has yet to appear. In this work, we devised an Ansatz for the dynamics of dipole moment densities that is linked with the Poisson equation of the electrical potential φ in coupling to the other equations of FHD. The resulting φ-FHD equations then serve as a platform for integrating the essential forces, including electrostatics in addition to hydrodynamics, pressure-volume equation of state, surface tension, and solvent-particle interactions that govern the emergent behaviors of molecular systems at an intermediate scale. This unique merit of φ-FHD is illustrated by showing that the water dielectric function and ion hydration free energies in homogeneous and heterogeneous systems can be captured accurately via the mesoscopic simulation. Furthermore, we show that the field variables of φ-FHD can be mapped from the trajectory of an all-atom molecular dynamics simulation such that model development and parametrization can be based on the information obtained at a finer-grained scale. With the aforementioned multiscale capabilities and a spatial resolution as high as 5 Å, the φ-FHD equations represent a useful semi-explicit solvent model for the modeling and simulation of complex systems, such as biomolecular machines and nanofluidics.
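The coupling to a Poisson equation can be sketched in its simplest form: below, a minimal one-dimensional Gauss-Seidel solve of φ'' = -ρ/ε with zero potential at both boundaries, on an illustrative grid with a made-up charge density. This is a generic numerical sketch of the electrostatic step, not the φ-FHD implementation.

```python
# 1-D Poisson solve by Gauss-Seidel: phi'' = -rho/eps, phi = 0 at both ends.

def solve_poisson_1d(rho, dx, eps, iters=20000):
    n = len(rho)
    phi = [0.0] * n
    for _ in range(iters):
        for i in range(1, n - 1):
            # Discrete update: phi[i] = (phi[i-1] + phi[i+1] + dx^2 rho/eps) / 2
            phi[i] = 0.5 * (phi[i - 1] + phi[i + 1] + dx * dx * rho[i] / eps)
    return phi

# A point-like charge in the middle of the domain (illustrative values).
n, dx, eps = 21, 0.05, 1.0
rho = [0.0] * n
rho[n // 2] = 1.0
phi = solve_poisson_1d(rho, dx, eps)
```

The converged potential is a symmetric tent peaked at the charge, which is the expected Green's-function shape for this boundary condition.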
Engineered Barrier System Degradation, Flow, and Transport Process Model Report
E.L. Hardin
2000-07-17
The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: ''Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model''. One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).
Review of Upscaling Methods for Describing Unsaturated Flow
Wood, Brian D.
2000-09-26
Representing small-scale features can be a challenge when one wants to model unsaturated flow in large domains. In this report, the various upscaling techniques are reviewed. The following upscaling methods have been identified from the literature: stochastic methods, renormalization methods, volume averaging, and homogenization methods. In addition, a final technique, full-resolution numerical modeling, is also discussed.
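The simplest upscaling result, which the reviewed methods generalize, makes a useful concrete example: for a layered medium, the effective permeability is the arithmetic mean of the layer permeabilities for flow parallel to the layers and the harmonic mean for flow across them. The values below are illustrative.

```python
# Classical bounds on effective permeability of a layered medium.

def arithmetic_mean(perms):
    """Effective permeability for flow parallel to the layers."""
    return sum(perms) / len(perms)

def harmonic_mean(perms):
    """Effective permeability for flow perpendicular to the layers."""
    return len(perms) / sum(1.0 / k for k in perms)

layers_md = [100.0, 10.0, 1.0]  # made-up layer permeabilities, millidarcy
k_parallel = arithmetic_mean(layers_md)
k_series = harmonic_mean(layers_md)
```

The two means can differ by an order of magnitude, which is why the choice of upscaling method matters so much for unsaturated flow in heterogeneous media.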
Water Distribution and Removal Model
Y. Deng; N. Chipman; E.L. Hardin
2005-08-26
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R model AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment.
The purposes of this WD&R model (CRWMS M&O 2000b) are to quantify and evaluate the distribution and drainage of seepage water within emplacement drifts during the period of compliance for post-closure performance. The model bounds the fraction of water entering the drift that will be prevented from contacting the waste by the combined effects of engineered controls on water distribution and on water removal. For example, water can be removed during pre-closure operation by ventilation and after closure by natural drainage into the fractured rock. Engineered drains could be used, if demonstrated to be necessary and effective, to ensure that adequate drainage capacity is provided. This report provides the screening arguments for certain Features, Events, and Processes (FEPs) that are related to water distribution and removal in the EBS. Applicable acceptance criteria from the Issue Resolution Status Reports (IRSRs) developed by the U.S. Nuclear Regulatory Commission (NRC 1999a; 1999b; 1999c; and 1999d) are also addressed in this document.
Computational Physics and Methods
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... for use in Advanced Strategic Computing codes Theory and modeling of dense plasmas in ICF and astrophysics environments Theory and modeling of astrophysics in support of NASA ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Material Point Methods and Multiphysics for Fracture and Multiphase Problems Joseph Teran, UCLA and Alice Koniges, LBL Contact: jteran@math.ucla.edu Material point methods (MPM) ...
Methods for pretreating biomass
Balan, Venkatesh; Dale, Bruce E; Chundawat, Shishir; Sousa, Leonardo
2015-03-03
A method of alkaline pretreatment of biomass, in particular, pretreating biomass with gaseous ammonia.
Methods for resistive switching of memristors
Mickel, Patrick R.; James, Conrad D.; Lohn, Andrew; Marinella, Matthew; Hsia, Alexander H.
2016-05-10
The present invention is directed generally to resistive random-access memory (RRAM or ReRAM) devices and systems, as well as methods of employing a thermal resistive model to understand and determine switching of such devices. In a particular example, the method includes generating a power-resistance measurement for the memristor device and applying an isothermal model to the power-resistance measurement in order to determine one or more parameters of the device (e.g., filament state).
Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.
2013-03-19
Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions that are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far from each other. The prediction capability is satisfactory: the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA (stochastic) model with the (deterministic) weather forecast model will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
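A toy version of the wavelet-plus-time-series idea can be sketched as follows: split a series into smooth and detail parts with one Haar level, fit a least-squares AR(1) coefficient to each part, and forecast each part separately. This stands in for the wavelet-ARIMA combination described above; the data are synthetic and the AR(1) fit is a deliberate simplification of ARIMA.

```python
# Forecast each wavelet component separately, since each is more
# stationary than the raw series.

def haar_split(x):
    """One-level Haar split into (approx, detail) sequences."""
    a = [(x[2*i] + x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def ar1_coefficient(series):
    """Least-squares AR(1) coefficient phi for x[t] ~ phi * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(v * v for v in series[:-1])
    return num / den if den else 0.0

errors = [0.4, 0.5, 0.3, 0.45, 0.35, 0.5, 0.25, 0.4]  # synthetic forecast errors
approx, detail = haar_split(errors)
phi_a, phi_d = ar1_coefficient(approx), ar1_coefficient(detail)
next_approx = phi_a * approx[-1]   # one-step forecast of the smooth part
next_detail = phi_d * detail[-1]   # one-step forecast of the detail part
```

An inverse wavelet step then recombines the component forecasts into a forecast of the original series; in practice each component would get its own full ARIMA model rather than a single AR(1) coefficient.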
Geophysical Methods | Open Energy Information
Methods Magnetic Methods Gravity Methods Radiometric Methods Seismic methods dominate oil and gas exploration, and probably account for over 80% of exploration dollars spent...
Multipole expansion method for supernova neutrino oscillations
Duan, Huaiyu; Shalgar, Shashank, E-mail: duan@unm.edu, E-mail: shashankshalgar@unm.edu [Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131 (United States)
2014-10-01
We demonstrate a multipole expansion method for calculating collective neutrino oscillations in supernovae using the neutrino bulb model. We show that it is much more efficient to solve multi-angle neutrino oscillations in the multipole basis than in the angle basis. The multipole expansion method also provides interesting insights into multi-angle calculations that were previously performed in the angle basis.
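The idea of working in a multipole basis can be illustrated generically: expand an angular distribution f(u), with u = cos θ, in Legendre polynomials and keep the first few moments. The sketch below is a plain Legendre-moment computation with an assumed normalization convention, not the paper's oscillation solver.

```python
# Legendre moments a_l = (2l+1)/2 * integral_{-1}^{1} f(u) P_l(u) du.

def legendre(l, u):
    """P_l(u) by the Bonnet recursion (n+1)P_{n+1} = (2n+1)uP_n - nP_{n-1}."""
    p0, p1 = 1.0, u
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2*n + 1) * u * p1 - n * p0) / (n + 1)
    return p1

def multipole_moments(f, lmax, npts=2001):
    """Trapezoid-rule moments of f on u in [-1, 1]."""
    du = 2.0 / (npts - 1)
    us = [-1.0 + i * du for i in range(npts)]
    moments = []
    for l in range(lmax + 1):
        integrand = [f(u) * legendre(l, u) for u in us]
        integral = du * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
        moments.append((2*l + 1) / 2.0 * integral)
    return moments

# f(u) = 1 + u is mildly forward peaked: only l = 0 and l = 1 survive.
moments = multipole_moments(lambda u: 1.0 + u, lmax=3)
```

A forward-peaked but smooth distribution is captured by a handful of moments, which is the efficiency argument for a multipole basis over a fine angle grid.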
Mathematical Formulation Requirements and Specifications for the Process Models
Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.
2010-11-01
The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. 
The graded and iterative approach to assessments naturally generates a suite of conceptual models that span a range of process complexity, potentially coupling hydrological, biogeochemical, geomechanical, and thermal processes. The Platform will use ensembles of these simulations to quantify the associated uncertainty, sensitivity, and risk. The Process Models task within the HPC Simulator focuses on the mathematical descriptions of the relevant physical processes.
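The ensemble-based uncertainty quantification described above can be illustrated with a minimal Monte Carlo sketch. This is purely a hypothetical stand-in, not ASCEM code: the `breakthrough_time` surrogate, its parameter ranges, and the quantity of interest are all invented for illustration, with a toy retarded-transport formula playing the role of one HPC Simulator run.

```python
import random
import statistics

def breakthrough_time(velocity, retardation, length=100.0):
    """Toy contaminant-transport surrogate: travel time of a retarded
    plume front over a fixed distance (stand-in for one simulator run)."""
    return length * retardation / velocity

def ensemble_uq(n_runs=1000, seed=42):
    """Monte Carlo ensemble over uncertain hydrological parameters,
    returning the mean and standard deviation of the quantity of interest."""
    rng = random.Random(seed)
    qois = []
    for _ in range(n_runs):
        velocity = rng.uniform(0.5, 1.5)     # groundwater velocity [m/d]
        retardation = rng.uniform(1.0, 3.0)  # sorption retardation factor [-]
        qois.append(breakthrough_time(velocity, retardation))
    return statistics.mean(qois), statistics.stdev(qois)

mean_t, std_t = ensemble_uq()
```

In a real workflow each sample would trigger a full coupled-process simulation; the ensemble statistics then feed sensitivity and risk analyses.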
End Points Specification Methods
Broader source: Energy.gov [DOE]
Two methods to develop end point specifications are presented. These have evolved from use in the field for deactivation projects.
Adams, David P; McDonald, Joel Patrick; Jared, Bradley Howell; Hodges, V. Carter; Hirschfeld, Deidre; Blair, Dianna S
2014-04-01
A method of pulsed laser intrinsic marking can provide a unique identifier to detect tampering or counterfeiting.
Geobacteraceae strains and methods
Lovley, Derek R.; Nevin, Kelly P.; Yi, Hana
2015-07-07
Embodiments of the present invention provide a method of producing genetically modified strains of electricigenic microbes that are specifically adapted for the production of electrical current in microbial fuel cells, as well as strains produced by such methods and fuel cells using such strains. In preferred embodiments, the present invention provides genetically modified strains of Geobacter sulfurreducens and methods of using such strains.
Enterprise Risk Management Model
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Model The Enterprise Risk Management (ERM) Model is a system used to analyze the cost and benefit of addressing risks inherent in the work performed by the Department of Energy. This system measures risk using a combination of qualitative and quantitative methods to set a standard method for analyzing risk across the many functions within the department. Risks generally fall within five categories regardless of the subject matter of the subsystem. These categories are (1) risks to people, (2)
Pore Scale Modeling of the Reactive Transport of Chromium in the Cathode of a Solid Oxide Fuel Cell
Ryan, Emily M.; Tartakovsky, Alexandre M.; Recknagle, Kurtis P.; Khaleel, Mohammad A.; Amon, Cristina
2011-01-01
We present a pore scale model of a solid oxide fuel cell (SOFC) cathode. Volatile chromium species are known to migrate from the current collector of the SOFC into the cathode, where over time they decrease the voltage output of the fuel cell. A pore scale model is used to investigate the reactive transport of chromium species in the cathode and to study the driving forces of chromium poisoning. A multi-scale modeling approach is proposed which uses a cell level model of the cathode, air channel, and current collector to determine the boundary conditions for a pore scale model of a section of the cathode. The pore scale model uses a discrete representation of the cathode to explicitly model the surface reactions of oxygen and chromium with the cathode material. The pore scale model is used to study the reaction mechanisms of chromium by considering the effects of reaction rates, diffusion coefficients, chromium vaporization, and oxygen consumption on chromium's deposition in the cathode. The study shows that chromium poisoning is most significantly affected by the chromium reaction rates in the cathode and that the reaction rates are a function of the local current density in the cathode.
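The interplay of diffusion and surface reaction rate that governs where chromium deposits can be sketched with a one-dimensional toy model. This is not the authors' pore scale code: the geometry, explicit finite-difference scheme, and all parameter values (`D`, `k_dep`, grid sizes) are illustrative assumptions; a first-order volumetric sink stands in for the discrete surface reactions.

```python
import numpy as np

def chromium_deposition(nx=100, nt=2000, dx=1e-6, dt=1e-4,
                        D=1e-9, k_dep=5.0, c_in=1.0):
    """Toy 1D diffusion of a volatile Cr species into a porous cathode with
    a first-order deposition sink, solved by explicit finite differences."""
    c = np.zeros(nx)        # gas-phase Cr concentration (normalized)
    deposit = np.zeros(nx)  # accumulated Cr deposit per cell
    for _ in range(nt):
        c[0] = c_in         # Cr source at the current-collector interface
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        sink = k_dep * c    # first-order deposition reaction
        c = c + dt * (D * lap - sink)
        c[-1] = c[-2]       # zero-flux at the cathode/electrolyte side
        deposit += dt * sink
    return c, deposit
```

With a fast reaction relative to diffusion, the deposit concentrates near the source, mirroring the finding that poisoning is controlled by the local reaction rate.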
Utility Solar Generation Valuation Methods
Hansen, Thomas N.; Dion, Phillip J.
2009-06-30
Tucson Electric Power (TEP) developed, tested, and verified the results of a new method for accurately evaluating the capacity credit of time-variant solar generating sources and reviewed new methods to appropriately and fairly evaluate the value of solar generation to electric utilities. The project also reviewed general integrated approaches for adequately compensating owners of solar generation for their benefits to utilities. However, given the limited funding and duration of this project, combined with the significant differences between utilities in rate structures, solar resource availability, and coincidence of solar generation with peak load periods, developing specific rate, rebate, and interconnection approaches to capture utility benefits for all possible utilities was well beyond the project's scope. The project developed software-based evaluation models to compare solar generation production data, measured in very short time increments called Sample Intervals over a typical utility Dispatch Cycle during an Evaluation Period, against utility system load data. Ten-second resolution generation production data from the SGSSS and actual one-minute resolution TEP system load data for 2006 and 2007, along with data from the Pennington Street Garage 60 kW DC capacity solar unit installed in downtown Tucson, were applied to the model for testing and verification of the evaluation method. Data were provided by other utilities, but critical time periods of data were missing, making results derived from those data inaccurate. The algorithms are based on previous analysis and review of specific 2005 and 2006 SGSSS production data. The model was built, tested, and verified by in-house TEP personnel.
For this phase of the project, TEP communicated with, shared solar production data with, and collaborated on the development of solar generation valuation tools with other utilities, including Arizona Public Service, Salt River Project, Xcel, and Nevada Power Company, as well as the Arizona electric cooperatives. In the second phase of the project, three years of 10-second power output data from the SGSSS were used to evaluate the effectiveness of frequency domain analysis, normal statistical distribution analysis, and finally maximum/minimum differential output analysis, to test the applicability of these mathematical methods in accurately modeling the output variations produced by clouds passing over the SGSSS array.
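The maximum/minimum differential output analysis mentioned above amounts to finding the worst power swing over a given sampling window. A minimal sketch, with an invented 10-second synthetic power trace standing in for the proprietary SGSSS data:

```python
def max_ramp(power, interval=1):
    """Maximum absolute power change between samples `interval` steps apart:
    a simple form of maximum/minimum differential output analysis."""
    return max(abs(power[i + interval] - power[i])
               for i in range(len(power) - interval))

# synthetic 10-second power trace [kW]: steady output, then a cloud passage
trace = [60.0] * 30 + [60.0 - 4.0 * i for i in range(1, 11)] + [20.0] * 30
worst_10s = max_ramp(trace, interval=1)  # worst single-sample (10 s) ramp
worst_60s = max_ramp(trace, interval=6)  # worst 1-minute ramp
```

Scanning several window lengths shows how the apparent variability of a solar plant depends on the Sample Interval chosen for the evaluation.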
Fossion, Ruben [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-543, Mexico D. F., C.P. 04510 (Mexico)
2010-09-10
The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). Because of the above reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.
The Uniform Methods Project: Methods for Determining Energy Efficiency...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures The Uniform Methods Project: Methods for Determining Energy Efficiency Savings...
Jablonowski, Christiane
2015-07-14
The research investigates and advances strategies for bridging the scale discrepancies between local, regional, and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, such as the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing, and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest such as tropical cyclones. Six research themes were chosen: (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling.
The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
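The core idea behind feature-driven AMR, flagging cells near sharp flow features for refinement, can be sketched in one dimension. This is an illustrative toy, not Chombo or the project's dynamical core: the field, grid, and gradient threshold are all invented for the example.

```python
def flag_for_refinement(values, dx, threshold):
    """Flag cells whose local gradient magnitude exceeds `threshold`:
    the basic criterion behind feature-driven adaptive mesh refinement."""
    flags = [False] * len(values)
    for i in range(1, len(values) - 1):
        grad = (values[i + 1] - values[i - 1]) / (2.0 * dx)  # centered difference
        if abs(grad) > threshold:
            flags[i] = True
    return flags

# coarse field with one steep front (e.g. the edge of a cyclone-like vortex)
field = [0.0] * 10 + [i / 5.0 for i in range(1, 5)] + [1.0] * 10
flags = flag_for_refinement(field, dx=1.0, threshold=0.05)
```

Flagged cells would then be subdivided (and the criterion re-applied on the finer level), so resolution concentrates on the front while the smooth regions stay coarse.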
Exascale Co-design for Modeling Materials in Extreme Environments
Germann, Timothy C.
2014-07-08
Computational materials science has provided great insight into the response of materials under extreme conditions that are difficult to probe experimentally. For example, shock-induced plasticity and phase transformation processes in single-crystal and nanocrystalline metals have been widely studied via large-scale molecular dynamics simulations, and many of these predictions are beginning to be tested at advanced 4th generation light sources such as the Advanced Photon Source (APS) and Linac Coherent Light Source (LCLS). I will describe our simulation predictions and their recent verification at LCLS, outstanding challenges in modeling the response of materials to extreme mechanical and radiation environments, and our efforts to tackle these as part of the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx). ExMatEx has initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. Such current scale-bridging strategies accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach.
Ginting, Victor
2014-03-15
It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
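The adjoint-based QoI error estimation mentioned above has a compact linear-algebra analogue: for a linear system A u = f and a linear QoI g·u, pairing the dual (adjoint) solution with the residual of an approximate solution recovers the QoI error exactly. This sketch illustrates only that principle with an invented 2x2 system; it is not the project's finite element implementation.

```python
import numpy as np

def adjoint_qoi_error(A, f, u_h, g):
    """Adjoint-based a posteriori estimate of the QoI error g.(u - u_h)
    for a linear system A u = f: solve the adjoint problem A^T phi = g,
    then pair the dual solution with the residual r = f - A u_h."""
    phi = np.linalg.solve(A.T, g)  # dual (adjoint) solution
    r = f - A @ u_h                # residual of the approximate solution
    return phi @ r

A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
u = np.linalg.solve(A, f)            # "exact" solution
u_h = u + np.array([0.01, -0.02])    # perturbed "numerical" solution
g = np.array([1.0, 1.0])             # QoI: sum of solution components
est = adjoint_qoi_error(A, f, u_h, g)
exact_err = g @ u - g @ u_h
```

For nonlinear problems or nonlinear QoIs the pairing becomes an estimate rather than an identity, which is where the a posteriori analysis in the project earns its keep.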
Method of degrading trinitrotoluene
Tyndall, R.L.; Vass, A.
1996-01-16
A method is disclosed of eluting trinitrotoluene (TNT) from soil using a dispersant from bacterial intra-amoebic isolate 1s, ATCC 75229.
Method for making tetraorganooxysilanes
Schattenmann, Florian Johannes; Lewis, Larry Neil
2001-01-01
A method for the preparation of tetraorganooxysilanes is provided which comprises reaction of a natural silicon dioxide source with an organo carbonate.
Method and Apparatus for High-Efficiency Direct Contact Condensation...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
steam during the removal of non-condensable gases and creating high back pressures that decreased turbine performance. ... modeling method for predicting the chemical, physical, ...
On the validation of seismic imaging methods: Finite frequency...
Office of Scientific and Technical Information (OSTI)
Title: On the validation of seismic imaging methods: Finite frequency or ray theory? We ... approach for state of the art seismic models developed for western North America. ...
Prusa, Joseph
2012-05-08
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.
Martin, F.S.; Silver, G.L.
1991-04-30
A method is described for reducing the concentration of any undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite.
Method of forming nanodielectrics
Tuncer, Enis [Knoxville, TN; Polyzos, Georgios [Oak Ridge, TN
2014-01-07
A method of making a nanoparticle-filled dielectric material. The method includes mixing nanoparticle precursors with a polymer material and reacting the precursors in situ to form nanoparticles dispersed within the polymer, yielding a dielectric composite.
Chainer, Timothy J.; Dang, Hien P.; Parida, Pritish R.; Schultz, Mark D.; Sharma, Arun
2015-08-11
A method aspect for removing heat from a data center may use liquid coolant, cooled without vapor compression refrigeration, on a liquid-cooled information technology equipment rack. The method may also include regulating liquid coolant flow to the data center through a range of liquid coolant flow values with a controller-apparatus, based upon an information technology equipment temperature threshold of the data center.
Methods for data classification
Garrity, George; Lilburn, Timothy G.
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
Decker, David L; Lyles, Brad F; Purcell, Richard G; Hershey, Ronald Lee
2014-05-20
An apparatus and method for supporting a tubing bundle during installation or removal. The apparatus includes a clamp for securing the tubing bundle to an external wireline. The method includes deploying the tubing bundle and wireline together. The tubing bundle is periodically secured to the wireline using a clamp.
Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann
2008-09-01
In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion, as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
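An empirical time- and temperature-dependent density model of the kind described above can be sketched as a simple exponential relaxation with an Arrhenius rate. The functional form and every parameter value here are illustrative assumptions, not the SIERRA/ARIA model's actual fit.

```python
import math

def foam_density(t, T, rho0=1000.0, rho_inf=100.0,
                 A=1.0e6, Ea=50e3, R=8.314):
    """Illustrative empirical density model for foam self-expansion:
    density relaxes exponentially from the unfoamed value rho0 [kg/m^3]
    toward the fully expanded value rho_inf, with an Arrhenius
    temperature dependence of the expansion rate.
    t in seconds, T in kelvin, Ea in J/mol."""
    k = A * math.exp(-Ea / (R * T))  # expansion rate [1/s]
    return rho_inf + (rho0 - rho_inf) * math.exp(-k * t)
```

In the continuum model, this prescribed density decrease supplies the volumetric source that drives the free-surface motion tracked by the level set method.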
Jackson, D.D.; Hollen, R.M.
1981-02-27
A method of very thoroughly and quickly cleaning a gauze electrode used in chemical analyses is given, as well as an automatic cleaning apparatus which makes use of the method. The method generates very little waste solution, and this is very important in analyzing radioactive materials, especially in aqueous solutions. The cleaning apparatus can be used in a larger, fully automated controlled-potential coulometric apparatus. About 99.98% of a 5 mg plutonium sample was removed in less than 3 minutes, using only about 60 ml of rinse solution and two main rinse steps.