Modeling Crack Propagation in Polycrystalline Microstructure Using Variational Multiscale Method
Sun, S.; Sundararaghavan, V.
2016-01-01
Crack propagation in a polycrystalline microstructure is analyzed using a novel multiscale model. The model includes an explicit microstructural representation at critical regions (stress concentrators such as notches and cracks) and a reduced order model that statistically captures the microstructure at regions far away from stress concentrations. Crack propagation is modeled in these critical regions using the variational multiscale method. In this approach, a discontinuous displacement field is added to elements that exceed the critical values of normal or tangential tractions during loading. Compared to traditional cohesive zone modeling approaches, the method does not require the use of any special interface elements in the microstructure and thus can model arbitrary crack paths. The capability of the method in predicting both intergranular and transgranular failure modes in an elastoplastic polycrystal is demonstrated under tensile and three-point bending loads.
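The enrichment trigger described in this abstract (a discontinuous displacement mode is added once the normal or tangential traction exceeds a critical value) can be sketched as follows; the function name, data layout, and threshold values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def flag_enriched_elements(tractions, normals, t_n_crit, t_t_crit):
    """Flag elements whose normal or tangential traction exceeds a
    critical value, marking them for a discontinuous displacement mode.

    tractions: (n_elem, 2) traction vectors at element sample points
    normals:   (n_elem, 2) unit normals of the candidate crack plane
    """
    flags = []
    for t, n in zip(tractions, normals):
        t_n = float(np.dot(t, n))                 # normal traction
        t_t = float(np.linalg.norm(np.asarray(t) - t_n * np.asarray(n)))  # tangential part
        flags.append(t_n > t_n_crit or t_t > t_t_crit)
    return flags
```

In a full implementation the flagged elements would receive the extra discontinuous shape functions of the variational multiscale method; here the sketch only isolates the traction check.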
Multiscale Subsurface Biogeochemical Modeling
U.S. Department of Energy (DOE) all webpages (Extended Search)
Simulation of flow inside an experimental packed bed, performed on Franklin Key...
Peridynamic Multiscale Finite Element Methods
Costa, Timothy; Bond, Stephen D.; Littlewood, David John; Moore, Stan Gerald
2015-12-01
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems that communicate across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale down to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the
Multiscale Thermohydrologic Model
T. Buscheck
2004-10-12
The purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. Thus, the goal is to predict the range of possible thermal-hydrologic conditions across the repository; this is quite different from predicting a single expected thermal-hydrologic response. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes (Table 1-1). These thermal-hydrologic parameters are required to support ''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504]). The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock. The MSTHM uses data obtained from the data tracking numbers (DTNs) listed in Table 4.1-1. The majority of those DTNs were generated from the following analyses and model reports: (1) ''UZ Flow Model and Submodels'' (BSC 2004 [DIRS 169861]); (2) ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004); (3) ''Calibrated Properties Model'' (BSC 2004 [DIRS 169857]); (4) ''Thermal Conductivity of the Potential Repository Horizon'' (BSC 2004 [DIRS 169854]); (5) ''Thermal Conductivity of the Non-Repository Lithostratigraphic Layers'' (BSC 2004 [DIRS 170033]); (6) ''Ventilation Model and Analysis Report'' (BSC 2004 [DIRS 169862]); (7) ''Heat Capacity
Wagner, Gregory John; Collis, Samuel Scott; Templeton, Jeremy Alan; Lehoucq, Richard B.; Parks, Michael L.; Jones, Reese E.; Silling, Stewart Andrew; Scovazzi, Guglielmo; Bochev, Pavel B.
2007-10-01
This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.
MULTISCALE THERMOHYDROLOGIC MODEL
T. Buscheck
2005-07-07
The intended purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. The goal of the MSTHM is to predict a reasonable range of possible thermal-hydrologic conditions within the emplacement drift. To be reasonable, this range includes the influence of waste-package-to-waste-package heat output variability relevant to the license application design, as well as the influence of uncertainty and variability in the geologic and hydrologic conditions relevant to predicting the thermal-hydrologic response in emplacement drifts. This goal is quite different from the goal of a model to predict a single expected thermal-hydrologic response. As a result, the development and validation of the MSTHM and the associated analyses using this model are focused on the goal of predicting a reasonable range of thermal-hydrologic conditions resulting from parametric uncertainty and waste-package-to-waste-package heat-output variability. Thermal-hydrologic conditions within emplacement drifts depend primarily on thermal-hydrologic conditions in the host rock at the drift wall and on the temperature difference between the drift wall and the drip-shield and waste-package surfaces. Thus, the ability to predict a reasonable range of relevant in-drift MSTHM output parameters (e.g., temperature and relative humidity) is based on valid predictions of thermal-hydrologic processes in the host rock, as well as valid predictions of heat-transfer processes between the drift wall and the drip-shield and waste-package surfaces. Because the invert contains crushed gravel derived from the host rock, the invert is, in effect, an extension of the host rock, with thermal and hydrologic properties that have been modified by virtue of the crushing (and the resulting
X. Frank Xu
2010-03-30
Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work had been done on integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The theory of MSFEM basically decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, which can be solved by using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches, by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM on engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will
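The slow-scale step described above replaces the fine-scale microstructure with an effective constitutive constant. As a toy illustration of that idea (not the actual MSFEM formulation), the effective coefficient of a 1-D layered random medium is the harmonic mean of the sampled fine-scale values:

```python
import numpy as np

def slow_scale_modulus(k_samples):
    """Effective (homogenized) coefficient for a slow-scale problem:
    the harmonic mean, which is exact for 1-D layered microstructure.
    The fast-scale problem would then quantify fluctuations about
    the solution computed with this effective constant."""
    k = np.asarray(k_samples, dtype=float)
    return len(k) / np.sum(1.0 / k)
```
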
Multi-Scale Multi-physics Methods Development for the Calculation...
Office of Scientific and Technical Information (OSTI)
by developing multi-scale, multi-physics methods and implementing them within the ... suitable scales for a physical and mathematical model and then deriving and applying ...
Towards a Multiscale Approach to Cybersecurity Modeling
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties--- connectivity, distance, and centrality--- for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
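A minimal sketch of the coarse-scale shortest-path idea: contract each cluster of nodes to a single coarse node and run a standard Dijkstra search on the quotient graph. The contraction rule and edge weights here are illustrative assumptions, not the authors' definitions of multiscale paths.

```python
import heapq
from collections import defaultdict

def coarse_shortest_path(edges, cluster, src, dst):
    """Shortest path on the quotient graph obtained by contracting each
    cluster of fine-scale nodes to one coarse node.

    edges:   iterable of (u, v, w) undirected weighted edges
    cluster: dict mapping fine node -> cluster id
    """
    g = defaultdict(list)
    for u, v, w in edges:
        cu, cv = cluster[u], cluster[v]
        if cu != cv:  # intra-cluster edges vanish at the coarse scale
            g[cu].append((cv, w))
            g[cv].append((cu, w))
    dist = {cluster[src]: 0.0}
    pq = [(0.0, cluster[src])]
    while pq:
        d, n = heapq.heappop(pq)
        if n == cluster[dst]:
            return d
        if d > dist.get(n, float("inf")):
            continue
        for m, w in g[n]:
            nd = d + w
            if nd < dist.get(m, float("inf")):
                dist[m] = nd
                heapq.heappush(pq, (nd, m))
    return float("inf")
```

In the cyber-defense reading of the abstract, `src` would be a compromised node and `dst` a sensitive machine, with the coarse distance giving a fast upper-level estimate.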
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO_{2} and comparing the predictions with experiments.
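For orientation, phase-field fracture models of the kind referenced here regularize a sharp crack with a damage field d in [0, 1] and minimize an energy of roughly the following standard (AT2-type) form; this is the generic functional, not necessarily the exact one calibrated in the paper:

```latex
E(u, d) = \int_\Omega (1 - d)^2 \, \psi_e\bigl(\varepsilon(u)\bigr) \, \mathrm{d}V
        + G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert \nabla d \rvert^2 \right) \mathrm{d}V
```

Here \psi_e is the elastic energy density, G_c the critical energy release rate, and \ell the regularization length; G_c and \ell are the kinds of parameters that atomistic simulations can supply in place of engineering-scale experiments.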
A multiscale two-point flux-approximation method
Møyner, Olav; Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common to all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
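For orientation, the classical two-point flux approximation that the MsTPFA method builds on can be written down in one dimension: the face transmissibility is the harmonic average of the neighboring cell permeabilities over the cell spacing. This is a textbook TPFA sketch under simple assumptions (unit cross-section, Dirichlet ends), not the multiscale method itself.

```python
import numpy as np

def tpfa_pressure_1d(perm, dx, p_left, p_right):
    """Solve the 1-D elliptic pressure equation -(k p')' = 0 with a
    two-point flux approximation and Dirichlet boundary pressures."""
    n = len(perm)
    T = np.zeros(n + 1)  # face transmissibilities
    for f in range(1, n):
        # harmonic average of the two half-cell transmissibilities
        T[f] = 2.0 / (dx / perm[f - 1] + dx / perm[f])
    # boundary faces: half-cell distance to the boundary value
    T[0] = 2.0 * perm[0] / dx
    T[n] = 2.0 * perm[-1] / dx
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        for f, j in ((i, i - 1), (i + 1, i + 1)):
            if 0 <= j < n:
                A[i, i] += T[f]
                A[i, j] -= T[f]
            else:  # boundary contribution
                A[i, i] += T[f]
                b[i] += T[f] * (p_left if j < 0 else p_right)
    return np.linalg.solve(A, b)
```

A multiscale variant would assemble coarse-scale transmissibilities from local flow solutions instead of directly from `perm`, following ingredient (i) in the abstract.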
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Glotzer, Sharon C.
2013-08-28
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
Multiscale Concrete Modeling of Aging Degradation
Hammi, Yousseff; Gullett, Philipp; Horstemeyer, Mark F.
2015-07-31
In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM), based on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contributions from the different degradation mechanisms is theoretically developed. The total damage is forwarded to a Multi-Stage Fatigue (MSF) model to enable the assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].
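The XFEM extension mentioned above enriches the displacement field with a Heaviside function so that a crack can cut through an element without remeshing. A 1-D sketch of the generic enrichment (illustrative names and sign convention; this is not the Giner et al. implementation):

```python
def enriched_displacement(x, x1, x2, u_std, a_enr, x_crack):
    """XFEM-style displacement in a cracked 1-D two-node element:
    u(x) = sum_i N_i(x) * (u_i + H(x) * a_i), where the Heaviside
    function H flips sign across the crack, producing a jump."""
    L = x2 - x1
    N = ((x2 - x) / L, (x - x1) / L)   # standard linear shape functions
    H = 1.0 if x >= x_crack else -1.0  # Heaviside enrichment
    return sum(Ni * (ui + H * ai) for Ni, ui, ai in zip(N, u_std, a_enr))
```

With zero standard displacements and unit enrichment coefficients, the element carries a pure displacement jump of 2 across the crack.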
Moist multi-scale models for the hurricane embryo
Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid
2010-01-01
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
A multilevel multiscale mimetic method for an anisotropic infiltration problem
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2009-01-01
Modeling of multiphase flow and transport in highly heterogeneous porous media must capture a broad range of coupled spatial and temporal scales. Recently, a hierarchical approach, dubbed the Multilevel Multiscale Mimetic (M³) method, was developed to simulate two-phase flow in porous media. The M³ method is locally mass conserving at all levels in its hierarchy, it supports unstructured polygonal grids and full tensor permeabilities, and it can achieve large coarsening factors. In this work we consider infiltration of water into a two-dimensional layered medium. The grid is aligned with the layers but not the coordinate axes. We demonstrate that with an efficient temporal updating strategy for the coarsening parameters, fine-scale accuracy of prominent features in the flow is maintained by the M³ method.
Multiscale Modeling of Energy Storage Materials | Argonne Leadership
U.S. Department of Energy (DOE) all webpages (Extended Search)
Computing Facility Multiscale Modeling of Energy Storage Materials PI Name: Gregory A. Voth PI Email: gavoth@uchicago.edu Institution: University of Chicago and Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 25,000,000 Year: 2012 Research Domain: Materials Science The leadership-class computing resources provided by the INCITE program will be used for the multiscale modeling of charge transport processes in materials relevant to fuel cell and battery
A bidirectional coupling procedure applied to multiscale respiratory modeling
Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
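Stripped of the Krylov acceleration, the bidirectional coupling reduces per timestep to a fixed-point iteration: the 3D model maps outlet pressures to flows, the lower-dimensional model maps flows back to pressures, and iteration stops when a pressure residual is small. A schematic sketch with hypothetical scalar callables (the actual framework uses a nonlinear Krylov accelerator and a vector-valued pressure-drop residual):

```python
def couple_timestep(solve_3d, solve_ode, p0, tol=1e-8, max_iter=50):
    """One bidirectionally coupled timestep as a plain fixed-point
    iteration between two user-supplied models.

    solve_3d:  pressure -> flow (stands in for the 3D CFD model)
    solve_ode: flow -> pressure (stands in for the distal-lung ODEs)
    """
    p = p0
    for _ in range(max_iter):
        q = solve_3d(p)           # 3D side: pressures to outlet flows
        p_new = solve_ode(q)      # 0D side: flows back to pressures
        if abs(p_new - p) < tol:  # pressure-residual convergence check
            return p_new
        p = p_new
    return p
```

The paper's contribution is precisely to make this loop converge with as little as one residual evaluation per timestep; the unaccelerated version above may need many iterations.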
Gao, Kai; Fu, Shubin; Gibson, Richard L.; Chung, Eric T.; Efendiev, Yalchin
2015-08-15
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios
2015-06-09
This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.
Fast multiscale Gaussian beam methods for wave equations in bounded convex domains
Bao, Gang; Department of Mathematics, Michigan State University, East Lansing, MI 48824 ; Lai, Jun; Qian, Jianliang
2014-03-15
Motivated by fast multiscale Gaussian wavepacket transforms and multiscale Gaussian beam methods which were originally designed for pure initial-value problems of wave equations, we develop fast multiscale Gaussian beam methods for initial boundary value problems of wave equations in bounded convex domains in the high frequency regime. To compute the wave propagation in bounded convex domains, we have to take into account reflecting multiscale Gaussian beams, which is accomplished by enforcing reflecting boundary conditions during beam propagation and carrying out suitable reflecting beam summation. To propagate multiscale beams efficiently, we prove that the ratio of the squared magnitude of beam amplitude and the beam width is roughly conserved, and accordingly we propose an effective indicator to identify significant beams. We also prove that the resulting multiscale Gaussian beam methods converge asymptotically. Numerical examples demonstrate the accuracy and efficiency of the method.
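The significance indicator described in the abstract can be sketched directly: since the ratio |A|^2 / w is roughly conserved along propagation, a single threshold on that ratio identifies the beams worth propagating. The function name and threshold are illustrative.

```python
def significant_beams(amplitudes, widths, threshold):
    """Return the indices of beams whose conserved-ratio indicator
    |A|^2 / w exceeds a threshold; the remaining beams contribute
    negligibly to the beam summation and can be dropped."""
    return [i for i, (A, w) in enumerate(zip(amplitudes, widths))
            if abs(A) ** 2 / w > threshold]
```
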
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Du, Qiang
2014-11-12
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
Multi-scale Modeling of Plasticity in Tantalum.
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay; Buchheit, Thomas E.; Boyce, Brad; Weinberger, Christopher
2015-12-01
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single-crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct
Multiscale Modeling of Hemodynamic Disorders | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Fedosov, D., Pivkin, I., Pan, W., Dao, M., Caswell, B., Karniadakis, G.E. This book offers a mathematical update of the state of the art of research in the field of mathematical and numerical models of the circulatory system. It is structured into different chapters, written by outstanding experts in the field. Many fundamental issues are considered, such as: the mathematical representation of vascular
Multiscale Modeling of Malaria | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Karniadakis, G.E. Parasitic infectious diseases like malaria and certain hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells. Such changes can disrupt blood flow and, possibly, brain perfusion, as in the case of cerebral malaria. In recent work on stochastic multiscale models, in conjunction with large-scale parallel computing, we were able to quantify, for the first time, the main biophysical
Multiscale Modeling of Sickle Anemia Blood Flow by Dissipative Particle Dynamics | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Huan, L., Caswell, B., Karniadakis, G. A multi-scale model for the sickle red blood cell is developed based on Dissipative Particle Dynamics (DPD). Different cell morphologies (sickle, granular, elongated) typically observed in vitro and in vivo are constructed, and the deviations from the biconcave shape are quantified by the asphericity and elliptical shape factors. The
Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.
2014-06-01
This presentation discusses a significant enhancement of computational efficiency in a nonlinear multiscale battery model for computer-aided engineering, part of current research at NREL.
Multi-Scale Multi-Dimensional Model for Better Cell Design and Management (Presentation)
Kim, G.-H.; Smith, K.
2008-09-01
Describes NREL's R&D to develop a multi-scale model to assist in designing better, more reliable lithium-ion battery cells for advanced vehicles.
Multiscale modeling for fluid transport in nanosystems.
Lee, Jonathan W.; Jones, Reese E.; Mandadapu, Kranthi Kiran; Templeton, Jeremy Alan; Zimmerman, Jonathan A.
2013-09-01
Atomistic-scale behavior drives performance in many micro- and nano-fluidic systems, such as microfluidic mixers and electrical energy storage devices. Bringing this information into the traditionally continuum models used for engineering analysis has proved challenging. This work describes one approach to this issue: developing atomistic-to-continuum multiscale and multiphysics methods that enable molecular dynamics (MD) representations of atoms to be incorporated into continuum simulations. Coupling is achieved by imposing constraints based on fluxes of conserved quantities between the two regions described by each model. The impact of electric fields and surface charges is also critical; hence, methodologies extending finite-element (FE) electric-field solvers to MD have been derived to account for these effects. Finally, the continuum description can be inconsistent with the coarse-grained MD dynamics, so FE equations based on MD statistics were derived to facilitate the multiscale coupling. Examples relevant to nanofluidic systems are shown, including pore flow, Couette flow, and the electric double layer.
Multi-Scale Multi-Dimensional Ion Battery Performance Model
Energy Science and Technology Software Center (OSTI)
2007-05-07
The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of the thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different-scale physics, achieving much higher computational efficiency compared to the single-domain approach. It solves a one-dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transport in a macro grid system.
Fluid simulations with atomistic resolution: a hybrid multiscale method with field-wise coupling
Borg, Matthew K. [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)]; Lockerby, Duncan A., E-mail: duncan.lockerby@warwick.ac.uk [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom)]; Reese, Jason M., E-mail: jason.reese@strath.ac.uk [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)]
2013-12-15
We present a new hybrid method for simulating dense fluid systems that exhibit multiscale behaviour, in particular, systems in which a Navier-Stokes model may not be valid in parts of the computational domain. We apply molecular dynamics as a local microscopic refinement for correcting the Navier-Stokes constitutive approximation in the bulk of the domain, as well as providing a direct measurement of velocity slip at bounding surfaces. Our hybrid approach differs from existing techniques, such as the heterogeneous multiscale method (HMM), in some fundamental respects. In our method, the individual molecular solvers, which provide information to the macro model, are not coupled with the continuum grid at nodes (i.e. point-wise coupling); instead, coupling occurs over distributed heterogeneous fields (here referred to as field-wise coupling). This affords two major advantages. Whereas point-wise coupled HMM is limited to regions of flow that are highly scale-separated in all spatial directions (i.e. where the state of non-equilibrium in the fluid can be adequately described by a single strain tensor and temperature gradient vector), our field-wise coupled HMM has no such limitations and so can be applied to flows with arbitrarily varying degrees of scale separation (e.g. flow from a large reservoir into a nano-channel). The second major advantage is that the position of molecular elements does not need to be collocated with nodes of the continuum grid, which means that the resolution of the microscopic correction can be adjusted independently of the resolution of the continuum model. This in turn means the computational cost and accuracy of the molecular correction can be independently controlled and optimised. The macroscopic constraints on the individual molecular solvers are artificial body-force distributions, used in conjunction with standard periodicity. We test our hybrid method on the Poiseuille flow problem for both Newtonian (Lennard-Jones) and non
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin
2015-06-05
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations showed that the numerical homogenization was equally accurate for these complex cases.
Integrated Multiscale Modeling of Molecular Computing Devices
Weinan E
2012-03-29
The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
Multiscale Design of Advanced Materials based on Hybrid Ab Initio and Quasicontinuum Methods
Luskin, Mitchell
2014-03-12
This project united researchers from mathematics, chemistry, computer science, and engineering for the development of new multiscale methods for the design of materials. Our approach was highly interdisciplinary, but it had two unifying themes: first, we utilized modern mathematical ideas about change-of-scale and state-of-the-art numerical analysis to develop computational methods and codes to solve real multiscale problems of DOE interest; and, second, we took very seriously the need for quantum mechanics-based atomistic forces, and based our methods on fast solvers of chemically accurate methods.
A MULTISCALE, CELL-BASED FRAMEWORK FOR MODELING CANCER DEVELOPMENT
JIANG, YI
2007-01-16
Cancer remains one of the leading causes of death from disease. We use a systems approach that combines mathematical modeling, numerical simulation, and in vivo and in vitro experiments to develop a predictive model that medical researchers can use to study and treat cancerous tumors. The multiscale, cell-based model includes intracellular regulation, cellular-level dynamics and intercellular interactions, and extracellular-level chemical dynamics. The intracellular protein regulation and signaling pathways are described by Boolean networks. The cellular-level growth and division dynamics, cellular adhesion, and interaction with the extracellular matrix are described by a lattice Monte Carlo model (the Cellular Potts Model). The extracellular dynamics of the signaling molecules and metabolites are described by a system of reaction-diffusion equations. All three levels of the model are integrated through a hybrid parallel scheme into a high-performance simulation tool. The simulation results reproduce experimental data on both avascular tumors and tumor angiogenesis. By combining the model with experimental data to construct biologically accurate simulations of tumors and their vascular systems, this model will enable medical researchers to gain a deeper understanding of the cellular and molecular interactions associated with cancer progression and treatment.
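The Boolean-network layer of such a model can be illustrated with a minimal synchronous update scheme, in which every node is recomputed from the previous state on each step. The three-node toy network and its rules below are hypothetical, chosen only to show the mechanics; they are not the paper's actual regulatory model.

```python
def boolean_step(state, rules):
    """One synchronous update of a Boolean regulatory network:
    every node is recomputed from the *previous* state."""
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical 3-node toy network: an external growth signal
# represses p27, and cyclin turns on only once the signal is
# present and p27 has been cleared.
rules = {
    "signal": lambda s: s["signal"],           # external input, held fixed
    "p27":    lambda s: not s["signal"],       # repressed by the signal
    "cyclin": lambda s: s["signal"] and not s["p27"],
}

state = {"signal": True, "p27": True, "cyclin": False}
state = boolean_step(state, rules)  # p27 switches off first
state = boolean_step(state, rules)  # then cyclin switches on
```

In the full multiscale model, the attractor each cell's network settles into would feed the cellular-level Potts dynamics (e.g. deciding growth vs. quiescence).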
A Mathematical Analysis of Atomistic-to-Continuum (AtC) Multiscale Coupling Methods
Gunzburger, Max
2013-11-13
We have worked on several projects aimed at improving the efficiency and understanding of multiscale methods, especially those applicable to problems involving atomistic-to-continuum coupling. Activities include blending methods for AtC coupling and efficient quasi-continuum methods for problems with long-range interactions.
Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting Technology | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
As part of this project, new solar forecasting technology will be developed that leverages big-data processing, deep machine learning, and cloud modeling, integrated in a universal platform with an open architecture. Similar to the Watson computer system, this proposed technology
SU-F-18C-15: Model-Based Multiscale Noise Reduction On Low Dose Cone Beam Projection
Yao, W; Farr, J
2014-06-15
Purpose: To improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone beam CT (CBCT) imaging systems, a Poisson process governs the randomness of photon fluence at the x-ray source and at the detector, because photon absorption in the medium is an independent binomial process. In a CBCT projection, the variance of the fluence consists of the variance of the noiseless imaged structure plus that of the Poisson noise, which is proportional to the mean (noiseless) fluence at the detector. This calls for multiscale filters that smooth noise while preserving the structural information of the imaged object. We used a mathematical model of the Poisson process to design multiscale filters and to balance noise correction against structure blurring. The algorithm was checked with low-dose kilovoltage CBCT projections acquired from a Varian OBI system. Results: Investigation of low-dose CBCT of a Catphan phantom and patients showed that our model-based multiscale technique could efficiently reduce noise while keeping the fine structure of the imaged object. After image processing, the number of visible line pairs in the Catphan phantom scanned with a 4 ms pulse time was similar to that scanned with 32 ms, and soft-tissue structure in simulated 4 ms patient head-and-neck images was also comparable with scanned 20 ms ones. Compared with a fixed-scale technique, the multiscale approach improved image quality. Conclusion: Projection-specific multiscale filters achieve a better balance between noise reduction and loss of structural information. The image quality of low-dose CBCT can be improved by using multiscale filters.
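As a rough illustration of Poisson-aware filtering (not the authors' projection-specific multiscale filters), one standard route is to apply a variance-stabilizing Anscombe transform, so the signal-dependent Poisson noise becomes approximately unit-variance Gaussian, smooth in the transformed domain, and map back. The kernel choice below is an assumption for the sketch.

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> approx. unit variance
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_projection(counts, kernel=5):
    """Illustrative Poisson-aware smoothing: stabilize the variance,
    apply a uniform moving-average filter, then transform back."""
    y = anscombe(counts.astype(float))
    pad = kernel // 2
    ypad = np.pad(y, pad, mode="edge")
    out = np.empty_like(y)
    H, W = y.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = ypad[i:i + kernel, j:j + kernel].mean()
    return inverse_anscombe(out)
```

A projection-specific multiscale filter, as in the abstract, would instead vary the smoothing scale with the local mean fluence rather than use one fixed kernel.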
Multiscale Modeling of the Orthotropic Behaviour of PA6-6 overmoulded Composites using MMI Approach
Bikard, Jerome; Robert, Gilles; Moulinjeune, Olivier [RHODIA ENGINEERING PLASTICS, Technyl Application Center Avenue Ramboz, BP 64, 69192 Saint FONS CEDEX (France)
2011-05-04
In this study, the MMI ConfidentDesign multiscale approach (a non-linear multiscale simulation based on DIGIMAT®, including injection modeling of the filled polymer and a multiscale mechanical model using the fiber orientation tensor resulting from the injection) has been combined with an orthotropic damageable elastic simulation. The anisotropic properties (including a rupture criterion) are estimated, and a multiscale simulation including the heterogeneous material properties resulting from the injection process is performed. The impact of fiber ratios is then investigated. The structural simulation predicts stresses localized close to the punch, both in the injected PA66 and in the composite part. The greater the fiber volume ratio, the greater the modulus and the more brittle the composite.
Components for Atomistic-to-Continuum Multiscale Modeling of Flow in Micro- and Nanofluidic Systems
Adalsteinsson, Helgi; Debusschere, Bert J.; Long, Kevin R.; Najm, Habib N.
2008-01-01
Micro- and nanofluidics pose a series of significant challenges for science-based modeling. Key among those are the wide separation of length- and timescales between interface phenomena and bulk flow, and the spatially heterogeneous solution properties near solid-liquid interfaces. It is not uncommon for characteristic scales in these systems to span nine orders of magnitude, from atomic motions in particle dynamics up to the evolution of mass transport at the macroscale, making explicit particle models intractable for all but the simplest systems. Recently, atomistic-to-continuum (A2C) multiscale simulations have gained a lot of interest as an approach that rigorously handles particle-level dynamics while also tracking the evolution of large-scale macroscale behavior. While these methods are clearly not applicable to all classes of simulations, they are finding traction in systems in which tight-binding, and physically important, dynamics at system interfaces have complex effects on the slower-evolving large-scale evolution of the surrounding medium. These conditions allow decomposition of the simulation into discrete domains, either spatially or temporally. In this paper, we describe how features of domain-decomposed simulation systems can be harnessed to yield flexible and efficient software for multiscale simulations of electric field-driven micro- and nanofluidics.
Bayesian data assimilation for stochastic multiscale models of transport in porous media.
Marzouk, Youssef M.; van Bloemen Waanders, Bart Gustaaf; Parno, Matthew; Ray, Jaideep; Lefantzi, Sophia; Salazar, Luke; McKenna, Sean Andrew; Klise, Katherine A.
2011-10-01
We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways: (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level, and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loeve expansion of a multi-Gaussian. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas. Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman
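The Karhunen-Loeve dimensionality reduction mentioned above amounts to expanding the field in eigenvectors of its covariance and retaining only the leading modes. A minimal 1D sketch, with an assumed squared-exponential covariance (the actual covariance model and discretization in the paper may differ):

```python
import numpy as np

def kl_basis(points, corr_len, n_modes):
    """Leading Karhunen-Loeve modes of a 1D Gaussian random field
    with a squared-exponential covariance (illustrative choice)."""
    d = points[:, None] - points[None, :]
    C = np.exp(-0.5 * (d / corr_len) ** 2)     # covariance matrix
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:n_modes]      # largest eigenvalues first
    return vals[idx], vecs[:, idx]

def sample_field(vals, vecs, xi):
    # Field realization from n_modes standard-normal coefficients xi
    return vecs @ (np.sqrt(vals) * xi)
```

The inverse problem is then posed over the low-dimensional coefficients `xi` rather than the full grid-resolved field, which is what makes MCMC or ensemble Kalman updates tractable.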
Sondak, David; Oberai, Assad A.
2012-10-15
Novel large eddy simulation (LES) models are developed for incompressible magnetohydrodynamics (MHD). These models include the application of the variational multiscale formulation of LES to the equations of incompressible MHD. Additionally, a new residual-based eddy viscosity model is introduced for MHD. A mixed LES model that combines the strengths of both of these models is also derived. The new models result in a consistent numerical method that is relatively simple to implement. A dynamic procedure for determining model coefficients is no longer required. The new LES models are tested on a decaying Taylor-Green vortex generalized to MHD and benchmarked against classical LES turbulence models. The LES simulations are run in a periodic box of size [-π, π]³ with 32 modes in each direction and are compared to a direct numerical simulation (DNS) with 512 modes in each direction. The new models are able to account for the essential MHD physics, which is demonstrated via comparisons of energy spectra. We also compare the performance of our models to a DNS simulation by Pouquet et al. ['The dynamics of unforced turbulence at high Reynolds number for Taylor-Green vortices generalized to MHD,' Geophys. Astrophys. Fluid Dyn. 104, 115-134 (2010)], for which the ratio of DNS modes to LES modes is 262:144.
A multiscale method for the analysis of defect behavior in Mo during electron irradiation
Rest, J.; Insepov, Z.; Ye, B.; Yun, D.
2014-10-01
In order to overcome a lack of experimental information on values for key materials properties and kinetic coefficients, a multiscale modeling approach is applied to defect behavior in irradiated Mo, where key materials properties, such as point defect (vacancy and interstitial) migration enthalpies, as well as kinetic factors, such as dimer formation, defect recombination, and self-interstitial/interstitial-loop interaction coefficients, are obtained by molecular dynamics calculations and implemented into rate-theory simulations of defect behavior. The multiscale methodology is validated against interstitial loop growth data obtained from electron irradiation of pure Mo. It is shown that the observed linear behavior of the loop diameter vs. the square root of irradiation time is a direct consequence of the 1D migration of self-interstitial atoms.
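The reported linear dependence of loop diameter on the square root of irradiation time can be checked with a simple least-squares fit. The snippet below is illustrative only, using synthetic data in place of the electron-irradiation measurements.

```python
import numpy as np

def fit_sqrt_growth(t, d):
    """Least-squares fit of d = a*sqrt(t) + b: the linear-in-sqrt(t)
    loop-diameter behavior attributed to 1D self-interstitial migration."""
    A = np.column_stack([np.sqrt(t), np.ones(len(t))])
    (a, b), *_ = np.linalg.lstsq(A, d, rcond=None)
    return a, b

# Synthetic "loop diameter vs. time" data consistent with sqrt(t) growth
t = np.linspace(1.0, 100.0, 20)
d = 3.0 * np.sqrt(t) + 1.0
a, b = fit_sqrt_growth(t, d)
```

On real data, the quality of this linear fit (vs., say, a fit linear in t) is what supports the 1D-migration interpretation.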
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Gunzburger, Max
2015-02-17
We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.
A second gradient theoretical framework for hierarchical multiscale modeling of materials
Luscher, Darby J; Bronkhorst, Curt A; Mc Dowell, David L
2009-01-01
A theoretical framework for the hierarchical multiscale modeling of the inelastic response of heterogeneous materials is presented. Within this multiscale framework, the second gradient is used as a nonlocal kinematic link between the response of a material point at the coarse scale and the response of a neighborhood of material points at the fine scale. Kinematic consistency between these scales results in specific requirements for constraints on the fluctuation field. The wryness tensor serves as a second-order measure of strain. The nature of the second-order strain induces antisymmetry in the first-order stress at the coarse scale. The multiscale ISV constitutive theory is couched in the coarse-scale intermediate configuration, from which an important new concept in scale transitions emerges, namely scale invariance of dissipation. Finally, a strategy for developing meaningful kinematic ISVs and the proper free energy functions and evolution kinetics is presented.
A voxel-based multiscale model to simulate the radiation response of hypoxic tumors
Espinoza, I.; Peschke, P.; Karger, C. P.
2015-01-15
Purpose: In radiotherapy, it is important to predict the response of tumors to irradiation prior to the treatment. This is especially important for hypoxic tumors, which are known to be highly radioresistant. Mathematical modeling based on the dose distribution, biological parameters, and medical images may help to improve this prediction and to optimize the treatment plan. Methods: A voxel-based multiscale tumor response model for simulating the radiation response of hypoxic tumors was developed. It considers viable and dead tumor cells, capillary and normal cells, as well as the most relevant biological processes such as (i) proliferation of tumor cells, (ii) hypoxia-induced angiogenesis, (iii) spatial exchange of cells leading to tumor growth, (iv) oxygen-dependent cell survival after irradiation, (v) resorption of dead cells, and (vi) spatial exchange of cells leading to tumor shrinkage. Oxygenation is described on a microscopic scale using a previously published tumor oxygenation model, which calculates the oxygen distribution for each voxel using the vascular fraction as the most important input parameter. To demonstrate the capabilities of the model, the dependence of the oxygen distribution on tumor growth and radiation-induced shrinkage is investigated. In addition, the impact of three different reoxygenation processes is compared, and tumor control probability (TCP) curves for a squamous cell carcinoma of the head and neck (HNSCC) are simulated under normoxic and hypoxic conditions. Results: The model describes the spatiotemporal behavior of the tumor on three different scales: (i) on the macroscopic scale, it describes tumor growth and shrinkage during radiation treatment, (ii) on the mesoscopic scale, it provides the cell density and vascular fraction for each voxel, and (iii) on the microscopic scale, the oxygen distribution may be obtained in terms of oxygen histograms. With increasing tumor size, the simulated tumors develop a hypoxic core. Within the
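Step (iv), oxygen-dependent cell survival after irradiation, is commonly modeled with the linear-quadratic survival model scaled by an oxygen enhancement ratio (OER). The sketch below uses that standard construction with illustrative parameter values; it is not the paper's exact formulation, and all numbers are assumptions.

```python
import numpy as np

def surviving_fraction(dose_gy, p_o2_mmhg,
                       alpha=0.3, beta=0.03, oer_max=3.0, k=3.0):
    """Linear-quadratic survival with a simple OER model:
    the effective dose is scaled down as oxygen partial pressure
    falls, so hypoxic cells survive irradiation more readily.
    All parameter values are illustrative only."""
    oer = (oer_max * p_o2_mmhg + k) / (p_o2_mmhg + k)   # 1 (anoxic) .. oer_max
    d_eff = dose_gy * oer / oer_max
    return np.exp(-alpha * d_eff - beta * d_eff ** 2)
```

In a voxel-based model, each voxel's oxygen histogram would supply the `p_o2_mmhg` values, and the TCP follows from the product of per-voxel survival probabilities.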
Collaborating for Multi-Scale Chemical Science
William H. Green
2006-07-14
Advanced model reduction methods were developed and integrated into the CMCS multiscale chemical science simulation software. The new technologies were used to simulate HCCI engines and burner flames with exceptional fidelity.
Frequency-domain multiscale quantum mechanics/electromagnetics simulation method
Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung; Koo, SiuKong; Chen, GuanHua; Chen, Quan; Wong, Ngai
2013-12-28
A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method could effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM and EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitate the information exchange between the QM and EM calculations and ensure that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon nanotube based molecular device.
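The self-consistent boundary-condition exchange described here is, at its core, a fixed-point iteration on interface data. A schematic sketch with hypothetical solver stubs; the damping factor and the toy linear maps are illustrative, not the paper's algorithm:

```python
def self_consistent_interface(solve_qm, solve_em, bc0, tol=1e-10, mix=0.5, max_iter=200):
    """Damped fixed-point iteration on interface boundary data: each solver
    maps the current boundary values to updated ones, and the loop stops
    once the interface values are self-consistent (continuity achieved)."""
    bc = bc0
    for _ in range(max_iter):
        new_bc = solve_em(solve_qm(bc))  # one QM -> EM boundary-data exchange
        if abs(new_bc - bc) < tol:
            return new_bc
        bc = (1.0 - mix) * bc + mix * new_bc  # under-relaxation for stability
    raise RuntimeError("interface iteration did not converge")

# Toy stand-ins for the two regional solvers (a contracting composition)
fixed = self_consistent_interface(lambda v: 0.5 * v + 1.0,
                                  lambda v: 0.5 * v + 1.0, bc0=0.0)
```

Under-relaxation of the exchanged boundary data is a common way to keep such interface iterations stable when the two solvers respond strongly to each other.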
A Multiscale Modeling Approach to Analyze Filament-Wound Composite Pressure Vessels
Nguyen, Ba Nghiep; Simmons, Kevin L.
2013-07-22
A multiscale modeling approach to analyze filament-wound composite pressure vessels is developed in this article. The approach, which extends the Nguyen et al. model [J. Comp. Mater. 43 (2009) 217] developed for discontinuous fiber composites to continuous fiber ones, spans three modeling scales. The microscale considers the unidirectional elastic fibers embedded in an elastic-plastic matrix obeying the Ramberg-Osgood relation and J2 deformation theory of plasticity. The mesoscale behavior representing the composite lamina is obtained through an incremental Mori-Tanaka type model and the Eshelby equivalent inclusion method [Proc. Roy. Soc. Lond. A241 (1957) 376]. The implementation of the micro-meso constitutive relations in the ABAQUS finite element package (via user subroutines) allows the analysis of a filament-wound composite pressure vessel (macroscale) to be performed. Failure of the composite lamina is predicted by a criterion that accounts for the strengths of the fibers and of the matrix as well as of their interface. The developed approach is demonstrated in the analysis of a filament-wound pressure vessel to study the effect of the lamina thickness on the burst pressure. The predictions are favorably compared to the numerical and experimental results by Lifshitz and Dayan [Comp. Struct. 32 (1995) 313].
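For reference, the Ramberg-Osgood relation used for the matrix at the microscale decomposes total strain into an elastic part and a power-law plastic part. A small sketch in one common form of the relation; all parameter values are illustrative assumptions, not taken from the article:

```python
def ramberg_osgood_strain(stress, E=3.0e9, sigma0=60.0e6, alpha=0.7, n=5.0):
    """Total uniaxial strain eps = s/E + alpha*(sigma0/E)*(s/sigma0)**n:
    the first term is elastic, the second the power-law plastic part."""
    return stress / E + alpha * (sigma0 / E) * (stress / sigma0) ** n

eps_low = ramberg_osgood_strain(6.0e6)    # well below sigma0: nearly elastic
eps_high = ramberg_osgood_strain(90.0e6)  # above sigma0: plasticity dominates
```

In J2 deformation theory the same scalar relation is applied to effective (von Mises) stress and strain, which is what makes it usable inside an incremental homogenization scheme.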
Horstemeyer, Mark R.; Chaudhuri, Santanu
2015-09-30
A multiscale modeling Internal State Variable (ISV) constitutive model was developed that captures the fundamental structure-property relationships. The macroscale ISV model used lower-length-scale simulations (Butler-Volmer and electronic structure results) to inform the ISVs at the macroscale. The chemomechanical ISV model was calibrated and validated from experiments with magnesium (Mg) alloys that were investigated under corrosive environments coupled with experimental electrochemical studies. Because the ISV chemomechanical model is physically based, it can be used for other material systems to predict corrosion behavior. As such, others can use the chemomechanical model for analyzing corrosion effects on their designs.
Multiscale Modeling of the Deformation of Advanced Ferritic Steels for Generation IV Nuclear Energy
Nasr M. Ghoniem; Nick Kioussis
2009-04-18
The objective of this project is to use the multi-scale modeling of materials (MMM) approach to develop an improved understanding of the effects of neutron irradiation on the mechanical properties of high-temperature structural materials that are being developed or proposed for Gen IV applications. In particular, the research focuses on advanced ferritic/martensitic steels to enable operation up to 650-700°C, compared to the current 550°C limit on high-temperature steels.
Sondak, D.; Shadid, J. N.; Oberai, A. A.; Pawlowski, R. P.; Cyr, E. C.; Smith, T. M.
2015-04-29
New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.
Carbon Capture Simulation Initiative: A Case Study in Multi-Scale Modeling and New Challenges
Miller, David; Syamlal, Madhava; Mebane, David; Storlie, Curtis; Bhattacharyya, Debangsu; Sahinidis, Nikolaos V.; Agarwal, Deborah A.; Tong, Charles; Zitney, Stephen E.; Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran; Ryan, Emily M.; Engel, David W.; Dale, Crystal
2014-04-01
Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative is a partnership among national laboratories, industry and universities that is developing and deploying a suite of multi-scale modeling and simulation tools including basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, high-resolution filtered computational-fluid-dynamic (CFD) submodels, validated high-fidelity device-scale CFD models with quantified uncertainty, and a risk analysis framework. These tools and models enable basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to synthesize and optimize a process. The resulting process informs the development of process control systems and more detailed simulations of potential equipment to improve the design and reduce scale-up risk. Quantification and propagation of uncertainty across scales is an essential part of these tools and models.
Sustainable Manufacturing via Multi-Scale, Physics-Based Process...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Sustainable Manufacturing via Multi-Scale, Physics-Based Process Modeling and ... design framework enabled by multi-scale, physics-based process models. ...
Multiscale modeling of blood-plasma separation in bifurcations | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
Leadership Computing Facility modeling of blood-plasma separation in bifurcations Authors: Li, X. J., A. S. Popel, G. E. Karniadakis Motion of a suspension of red blood cells (RBCs) flowing in a Y-shaped bifurcating microfluidic channel is investigated using a low-dimensional RBC validated 3D model based on dissipative particle dynamics. No-slip wall boundary and adaptive boundary conditions were implemented to model hydrodynamic flow within a specific wall structure of diverging
Assessment of Multi-Scale T/H Codes and Models for DNB CP
U.S. Department of Energy (DOE) all webpages (Extended Search)
Assessment of Multi-Scale Thermal-Hydraulic Codes and Models for DNB Challenge Problem Applications L3.AMA.CP.P8.01 Yixing Sung, Jin Yan, Liping Cao, Vefa N. Kucukboyaci, Emre Tatli Westinghouse Electric Company LLC Mark A. Christon, Jozsef Bakosi Los Alamos National Laboratories Robert K. Salko Oak Ridge National Laboratories Hongbin Zhang Idaho National Laboratory March 31, 2014 CASL-U-2014-0032-000 L3.AMA.CP.P8.01 ii CASL-U-2014-0032-000 REVISION LOG Revision Date Affected Pages Revision
Safer Batteries through Coupled Multiscale Modeling (ICCS 2015)
Turner, John A; Allu, Srikanth; Berrill, Mark A; Elwasif, Wael R; Kalnaus, Sergiy; Kumar, Abhishek; Lebrun-Grandie, Damien T; Pannala, Sreekanth; Simunovic, Srdjan
2015-01-01
Batteries are highly complex electrochemical systems, with performance and safety governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. We describe a new, open source computational environment for battery simulation known as VIBE - the Virtual Integrated Battery Environment. VIBE includes homogenized and pseudo-2D electrochemistry models such as those by Newman-Tiedemann-Gu (NTG) and Doyle-Fuller-Newman (DFN, a.k.a. DualFoil) as well as a new advanced capability known as AMPERES (Advanced MultiPhysics for Electrochemical and Renewable Energy Storage). AMPERES provides a 3D model for electrochemistry and full coupling with 3D electrical and thermal models on the same grid. VIBE/AMPERES has been used to create three-dimensional battery cell and pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical response under adverse conditions.
Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance
Atamturktur, Sez; Unal, Cetin; Hemez, Francois; Williams, Brian; Tome, Carlos
2015-03-16
The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VUQ) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this
Wang, Yuan; Wang, Minghuai; Zhang, Renyi; Ghan, Steven J.; Lin, Yun; Hu, Jiaxi; Pan, Bowen; Levy, Misti; Jiang, Jonathan; Molina, Mario J.
2014-05-13
Atmospheric aerosols impact weather and global general circulation by modifying cloud and precipitation processes, but the magnitude of cloud adjustment by aerosols remains poorly quantified and represents the largest uncertainty in estimated forcing of climate change. Here we assess the impacts of anthropogenic aerosols on the Pacific storm track using a multi-scale global aerosol-climate model (GCM). Simulations of two aerosol scenarios corresponding to present-day and pre-industrial conditions reveal long-range transport of anthropogenic aerosols across the north Pacific and large resulting changes in the aerosol optical depth, cloud droplet number concentration, and cloud and ice water paths. Shortwave and longwave cloud radiative forcing at the top of the atmosphere are changed by −2.5 and +1.3 W m⁻², respectively, by emission changes from pre-industrial to present day, and an increased cloud-top height indicates invigorated mid-latitude cyclones. The overall increased precipitation and poleward heat transport reflect intensification of the Pacific storm track by anthropogenic aerosols. Hence, this work provides for the first time a global perspective of the impacts of Asian pollution outflows from GCMs. Furthermore, our results suggest that the multi-scale modeling framework is essential in producing the aerosol invigoration effect of deep convective clouds on the global scale.
Multiscale model of metal alloy oxidation at grain boundaries
Sushko, Maria L.; Alexandrov, Vitaly; Schreiber, Daniel K.; Rosso, Kevin M.; Bruemmer, Stephen M.
2015-06-07
High temperature intergranular oxidation and corrosion of metal alloys is one of the primary causes of materials degradation in nuclear systems. In order to gain insights into grain boundary oxidation processes, a mesoscale metal alloy oxidation model is established by combining quantum Density Functional Theory (DFT) and mesoscopic Poisson-Nernst-Planck/classical DFT, with predictions focused on Ni alloyed with either Cr or Al. Analysis of species and fluxes at steady-state conditions indicates that the oxidation process involves vacancy-mediated transport of Ni and the minor alloying element to the oxidation front and the formation of stable metal oxides. The simulations further demonstrate that the mechanism of oxidation for Ni-5Cr and Ni-4Al is qualitatively different. Intergranular oxidation of Ni-5Cr involves the selective oxidation of the minor element and not matrix Ni, due to slower diffusion of Ni relative to Cr in the alloy and due to the significantly smaller energy gain upon the formation of nickel oxide compared to that of Cr₂O₃. This essentially one-component oxidation process results in continuous oxide formation and a monotonic Cr vacancy distribution ahead of the oxidation front, peaking at the alloy/oxide interface. In contrast, Ni and Al are both oxidized in Ni-4Al, forming a mixed spinel NiAl₂O₄. Different diffusivities of Ni and Al give rise to a complex elemental distribution in the vicinity of the oxidation front. Slower diffusing Ni accumulates in the oxide and metal within 3 nm of the interface, while Al penetrates deeper into the oxide phase. Ni and Al are both depleted from the region 3–10 nm ahead of the oxidation front, creating voids. The oxide microstructure is also different. Cr₂O₃ has a plate-like structure with 1.2–1.7 nm wide pores running along the grain boundary, while NiAl₂O₄ has 1.5 nm wide pores in the direction parallel to the grain boundary and 0.6 nm pores in the perpendicular
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Multi-scale modeling of inter-granular fracture in UO2
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.; Biner, S. Bulent
2015-03-01
A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity, pore size, and grain size on intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties for different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore sizes, and grain sizes. In these simulations the grain boundary fracture properties obtained from molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of a microstructurally informed engineering-scale model from properties evaluated at the atomistic scale.
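The last step, fitting the phase-field responses with a stress-based engineering-scale model, amounts to regressing a strength law against the simulated data. A generic sketch using a hypothetical strength-porosity power law and synthetic stand-in data; the functional form and all numbers are illustrative, not from this work:

```python
import math

def fit_power_law(porosities, strengths):
    """Least-squares fit of log(sigma_f) = log(sigma0) + m*log(1 - p),
    a simple stress-based strength-porosity law of the kind an
    engineering-scale model could use."""
    xs = [math.log(1.0 - p) for p in porosities]
    ys = [math.log(s) for s in strengths]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    sigma0 = math.exp(ybar - m * xbar)
    return sigma0, m

# Synthetic stand-in for phase-field results: sigma0 = 1.2 GPa, m = 3
ps = [0.02, 0.05, 0.10, 0.15]
strengths = [1.2e9 * (1 - p) ** 3 for p in ps]
sigma0, m = fit_power_law(ps, strengths)
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real simulation output the residuals indicate how well the chosen engineering-scale form captures the microstructural trends.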
T. A. Buscheck; Y. Sun; Y. Hao
2006-03-28
The MultiScale ThermoHydrologic Model (MSTHM) predicts thermal-hydrologic (TH) conditions within emplacement tunnels (drifts) and in the adjoining host rock at Yucca Mountain, Nevada, which is the proposed site for a radioactive waste repository in the US. Because these predictions are used in the performance assessment of the Yucca Mountain repository, they must address the influence of variability and uncertainty of the engineered- and natural-system parameters that significantly influence those predictions. Parameter-sensitivity studies show that the MSTHM predictions adequately propagate the influence of parametric variability and uncertainty. Model-validation studies show that the influence of conceptual-model uncertainty on the MSTHM predictions is insignificant compared to that of parametric uncertainty, which is propagated through the MSTHM.
Multiscale Multiphysics Lithium-Ion Battery Model with Multidomain Modular Framework
Kim, G. H.
2013-01-01
Lithium-ion batteries (LIBs), which power the recent wave of ubiquitous personal electronics, are also believed to be a key enabler of vehicle powertrain electrification on the path toward a sustainable transportation future. Over the past several years, the National Renewable Energy Laboratory (NREL) has developed the Multi-Scale Multi-Domain (MSMD) model framework, an expandable platform and a generic, modularized, flexible framework resolving interactions among multiple physics occurring at varied length and time scales in LIBs [1]. NREL has continued to enhance the functionality of the framework and to develop constituent models in the context of the MSMD framework in response to the U.S. Department of Energy's CAEBAT program objectives. This talk will introduce recent advancements in NREL's LIB modeling research with regard to scale-bridging, multi-physics integration, and numerical scheme development.
Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling
McDowell, David; Deo, Chaitanya; Zhu, Ting; Wang, Yan
2015-10-21
Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high strength and high ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by better understanding the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production; radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of the interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena of irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb, and cross slip.
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
2012-05-01
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Anh Bui; Nam Dinh; Brian Williams
2013-09-01
In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-law based but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were simultaneously used in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the CIPS Validation Data Plan at the Consortium for Advanced Simulation of LWRs to enable
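The Bayesian inference step can be pictured with a minimal Metropolis sampler calibrating a single empirical coefficient against synthetic measurements. The actual CASL work uses far more sophisticated statistical modeling, so everything below (flat prior, step size, toy data, assumed measurement noise) is an illustrative assumption:

```python
import math
import random

def metropolis_calibrate(loglike, theta0, step=0.1, n_samples=5000, seed=0):
    """Minimal Metropolis sampler for calibrating one sub-model
    parameter under a flat prior: propose, accept/reject, record."""
    rng = random.Random(seed)
    theta, lp = theta0, loglike(theta0)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = loglike(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Toy calibration target: y = c*x with c unknown, deterministic pseudo-noise
xs = [0.1 * k for k in range(1, 21)]
noise = [0.05 * ((i * 7 % 10) - 4.5) / 4.5 for i in range(20)]
data = [2.0 * x + e for x, e in zip(xs, noise)]

def loglike(c):
    # Gaussian likelihood with an assumed measurement sigma of 0.05
    return -sum((y - c * x) ** 2 for x, y in zip(xs, data)) / (2 * 0.05 ** 2)

post = metropolis_calibrate(loglike, theta0=1.0)
c_est = sum(post[2000:]) / len(post[2000:])  # posterior mean after burn-in
```

The appeal of this route over point tuning is that the retained samples quantify parameter uncertainty, which can then be propagated into model predictions.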
Luo, Jian; Tomar, Vikas; Zhou, Naixie; Lee, Hongsuk
2013-06-30
Based on a recent discovery of premelting-like grain boundary segregation in refractory metals occurring at high temperatures and/or high alloying levels, this project investigated grain boundary segregation and embrittlement in tungsten (W) based alloys. Specifically, new interfacial thermodynamic models have been developed and quantified to predict high-temperature grain boundary segregation in the W-Ni binary alloy and W-Ni-Fe, W-Ni-Ti, W-Ni-Co, W-Ni-Cr, W-Ni-Zr and W-Ni-Nb ternary alloys. The thermodynamic modeling results have been experimentally validated for selected systems. Furthermore, multiscale modeling has been conducted at continuum, atomistic and quantum-mechanical levels to link grain boundary segregation with embrittlement. In summary, this 3-year project has successfully developed a theoretical framework in combination with a multiscale modeling strategy for predicting grain boundary segregation and embrittlement in W based alloys.
Discharge Performance of Li-O_{2} Batteries Using a Multiscale Modeling Approach
Bao, Jie; Xu, Wu; Bhattacharya, Priyanka; Stewart, Mark L.; Zhang, Jiguang; Pan, Wenxiao
2015-06-10
To study the discharge performance of Li–O_{2} batteries, we propose a multiscale modeling framework that links models in an upscaling fashion from the nanoscale to the mesoscale and finally to the device scale. We have effectively reconstructed the microstructure of a Li–O_{2} air electrode in silico, conserving the porosity, surface-to-volume ratio, and pore size distribution of the real air electrode structure. The mechanism of rate-dependent morphology of Li_{2}O_{2} growth is incorporated into the mesoscale model. The correlation between the active-surface-to-volume ratio and averaged Li_{2}O_{2} concentration is derived to link different scales. The proposed approach's accuracy is first demonstrated by comparing the predicted discharge curves of Li–O_{2} batteries with experimental results at high current densities. Next, the validated modeling approach effectively captures the significant improvement in discharge capacity due to the formation of Li_{2}O_{2} particles. Finally, it predicts the discharge capacities of Li–O_{2} batteries with different air electrode microstructure designs and operating conditions.
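Two of the descriptors conserved in the in silico reconstruction, porosity and surface-to-volume ratio, are straightforward to evaluate on a binary voxel grid. A sketch of that bookkeeping (unit voxels, face counting; not the authors' code):

```python
def porosity_and_surface_to_volume(grid):
    """Compute porosity (pore volume fraction) and pore surface-to-volume
    ratio from a binary voxel grid (1 = solid, 0 = pore), counting
    solid/pore voxel face adjacencies as interfacial area (unit voxels)."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    total = nz * ny * nx
    pores = sum(1 for z in range(nz) for y in range(ny) for x in range(nx)
                if grid[z][y][x] == 0)
    faces = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                # count each interior face once, in the +z/+y/+x directions
                for dz, dy, dx in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    z2, y2, x2 = z + dz, y + dy, x + dx
                    if z2 < nz and y2 < ny and x2 < nx \
                            and grid[z][y][x] != grid[z2][y2][x2]:
                        faces += 1
    return pores / total, faces / total

# 2x2x2 grid with a single pore voxel in one corner
g = [[[1, 1], [1, 1]], [[1, 1], [1, 0]]]
phi, sv = porosity_and_surface_to_volume(g)
```

A reconstruction procedure would adjust the voxel field until such descriptors (plus the pore size distribution) match those measured on the real electrode.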
Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework
Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; Somerville, Richard C.; Zhang, Kai; Liu, Xiaohong; Li, J-L F.
2014-11-06
In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper troposphere cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration, CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures following the homogeneous nucleation threshold. The MMF models predict a higher frequency of midlatitude supersaturation in the Southern hemisphere and winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that compared to a conventional GCM, the MMF is a more powerful model to emulate parameters that evolve over short time scales such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation schemes and the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model can produce good agreement to the satellite retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specification of the Dcs value.
Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework
Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; Somerville, Richard C.; Zhang, Kai; Liu, Xiaohong; Li, J-L F.
2014-12-01
In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper troposphere cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration, CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures following homogeneous nucleation threshold. The MMF models predict a higher frequency of midlatitude supersaturation in the Southern hemisphere and winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that compared to a conventional GCM, the MMF is a more powerful model to emulate parameters that evolve over short time scales such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation schemes and the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 ?m for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model can produce good agreement to the satellite retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specification of Dcs value.
Toward Multi-scale Modeling and simulation of conduction in heterogeneous materials.
Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan; Cooper, Marcia A.; Erikson, William W.; Foiles, Stephen M.; Kay, Jeffrey J; Phinney, Leslie M.; Piekos, Edward S.; Specht, Paul Elliott; Wixom, Ryan R.; Yarrington, Cole
2015-01-01
This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success, for the first time, although preliminary, in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials. This is the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6).
A Unified Multi-Scale Model for Pore-Scale Flow Simulations in Soils
Yang, Xiaofan; Liu, Chongxuan; Shang, Jianying; Fang, Yilin; Bailey, Vanessa L.
2014-01-30
Pore-scale simulations have received increasing interest in the subsurface sciences for the mechanistic insights they provide into the macroscopic phenomena of water flow and reactive transport. Applying pore-scale simulation to soils and sediments is, however, challenging because characterization limitations often allow only partial resolution of pore structure and geometry. A significant proportion of the pore space in soils and sediments is below the spatial resolution, forming a mixed medium of pore and porous domains. Here we report a unified multi-scale model (UMSM) that can be used to simulate water flow and transport in mixed media of pore and porous domains under both saturated and unsaturated conditions. The approach modifies the classic Navier-Stokes equations by adding a Darcy term to describe fluid momentum and uses a generalized mass balance equation for saturated and unsaturated conditions. By properly defining physical parameters, the UMSM can be applied in both pore and porous domains. This paper describes the set of equations for the UMSM, a series of validation cases under saturated and unsaturated conditions, and a real soil case demonstrating the application of the approach.
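One plausible form of the Darcy-augmented momentum balance described in this abstract (a sketch under assumed notation, not necessarily the authors' exact formulation) is

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\nabla^{2}\mathbf{u} - \frac{\mu}{k}\,\mathbf{u} + \rho\mathbf{g},
```

where the Darcy term \(\mu\mathbf{u}/k\) is negligible in resolved pore space (large permeability \(k\)) and dominates in the unresolved porous domain, so a single momentum equation can cover both domains.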
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant; Radermacher, Reinhard; Abdelaziz, Omar
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~9 mm) to smaller channel sizes (<5 mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameters ranging from 0.5 to 2.0 mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of heat exchanger performance, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent lower air-side pressure drop and doubled air-side heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
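The Kriging metamodeling step described above can be sketched as simple Gaussian-process interpolation: fit correlation weights to a handful of CFD samples, then predict air-side quantities at unsampled designs. The data, length scale and function names below are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential correlation between two design-point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def fit_kriging(X, y, length_scale=1.0, nugget=1e-10):
    # Solve K alpha = y; the tiny nugget keeps K well conditioned
    K = rbf_kernel(X, X, length_scale) + nugget * np.eye(len(X))
    return np.linalg.solve(K, y)

def predict_kriging(Xnew, X, alpha, length_scale=1.0):
    # Kriging mean prediction at new design points
    return rbf_kernel(Xnew, X, length_scale) @ alpha

# hypothetical CFD samples: (tube diameter mm, air velocity m/s) -> pressure drop
X = np.array([[0.5, 1.0], [1.0, 2.0], [2.0, 3.0], [1.5, 1.5]])
y = np.array([10.0, 25.0, 60.0, 35.0])
alpha = fit_kriging(X, y)
print(predict_kriging(np.array([[1.2, 2.2]]), X, alpha))
```

Because Kriging interpolates, the metamodel reproduces the CFD samples exactly (up to the nugget), which is what makes it attractive for replacing expensive solver calls inside the genetic-algorithm loop.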
Wei, Yaxing; Liu, Shishi; Huntzinger, Deborah N.; Michalak, Anna M.; Viovy, Nicolas; Post, Wilfred M.; Schwalm, Christopher R.; Schaeffer, Kevin; Jacobson, Andrew R.; Lu, Chaoqun; et al
2014-12-05
Ecosystems are important and dynamic components of the global carbon cycle, and terrestrial biospheric models (TBMs) are crucial tools for furthering our understanding of how terrestrial carbon is stored and exchanged with the atmosphere across a variety of spatial and temporal scales. Improving TBM skill, and quantifying and reducing their estimation uncertainties, pose significant challenges. The Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) is a formal multi-scale and multi-model intercomparison effort set up to tackle these challenges. The MsTMIP protocol prescribes standardized environmental driver data that are shared among model teams to facilitate model-model and model-observation comparisons. In this article, we describe the global and North American environmental driver data sets prepared for the MsTMIP activity, both to support their use in MsTMIP and to make these data, along with the processes used in selecting and processing them, accessible to a broader audience. Based on project needs and lessons learned from past model intercomparison activities, we compiled climate, atmospheric CO2 concentration, nitrogen deposition, land use and land cover change (LULCC), C3/C4 grass fraction, major crop, phenology and soil data into a standard format for global (0.5° x 0.5° resolution) and regional (North American: 0.25° x 0.25° resolution) simulations. In order to meet the needs of MsTMIP, improvements were made to several of the original environmental data sets by improving their quality and/or changing their spatial and temporal coverage and resolution. The resulting standardized model driver data sets are being used by over 20 different models participating in MsTMIP. Lastly, the data are archived at the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, http://daac.ornl.gov) to provide long-term data management and distribution.
Huntzinger, D.N.; Schwalm, C.; Michalak, A.M; Schaefer, K.; King, A.W.; Wei, Y.; Jacobson, A.; Liu, S.; Cook, R.; Post, W.M.; Berthier, G.; Hayes, D.; Huang, M.; Ito, A.; Lei, H.; Lu, C.; Mao, J.; Peng, C.H.; Peng, S.; Poulter, B.; Riccuito, D.; Shi, X.; Tian, H.; Wang, W.; Zeng, N.; Zhao, F.; Zhu, Q.
2013-01-01
Terrestrial biosphere models (TBMs) have become an integral tool for extrapolating local observations and understanding of land-atmosphere carbon exchange to larger regions. The North American Carbon Program (NACP) Multi-scale synthesis and Terrestrial Model Intercomparison Project (MsTMIP) is a formal model intercomparison and evaluation effort focused on improving the diagnosis and attribution of carbon exchange at regional and global scales. MsTMIP builds upon current and past synthesis activities, and has a unique framework designed to isolate, interpret, and inform understanding of how model structural differences impact estimates of carbon uptake and release. Here we provide an overview of the MsTMIP effort and describe how the MsTMIP experimental design enables the assessment and quantification of TBM structural uncertainty. Model structure refers to the types of processes considered (e.g. nutrient cycling, disturbance, lateral transport of carbon), and how these processes are represented (e.g. photosynthetic formulation, temperature sensitivity, respiration) in the models. By prescribing a common experimental protocol with standard spin-up procedures and driver data sets, we isolate any biases and variability in TBM estimates of regional and global carbon budgets resulting from differences in the models themselves (i.e. model structure) and model-specific parameter values. An initial intercomparison of model structural differences is represented using hierarchical cluster diagrams (a.k.a. dendrograms), which highlight similarities and differences in how models account for carbon cycle, vegetation, energy, and nitrogen cycle dynamics. We show that, despite the standardized protocol used to derive initial conditions, models show a high degree of variation for GPP, total living biomass, and total soil carbon, underscoring the influence of differences in model structure and parameterization on model estimates.
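The hierarchical clustering behind such dendrograms can be sketched with single-linkage agglomeration over binary process-inclusion vectors (one row per model, one column per process). The feature encoding and function below are hypothetical, not MsTMIP's actual procedure.

```python
import numpy as np

def single_linkage(X):
    """Naive single-linkage agglomerative clustering on binary
    process-inclusion vectors (rows = models), Hamming distance.
    Returns the merge history as (cluster_a, cluster_b, distance)."""
    clusters = {i: [i] for i in range(len(X))}
    D = (X[:, None, :] != X[None, :, :]).sum(-1).astype(float)
    merges = []
    nxt = len(X)
    while len(clusters) > 1:
        keys = list(clusters)
        best = None
        for a in range(len(keys)):
            for b in range(a + 1, len(keys)):
                # single linkage: minimum leaf-to-leaf distance
                d = min(D[i, j] for i in clusters[keys[a]]
                                for j in clusters[keys[b]])
                if best is None or d < best[0]:
                    best = (d, keys[a], keys[b])
        d, ka, kb = best
        merges.append((ka, kb, d))
        clusters[nxt] = clusters.pop(ka) + clusters.pop(kb)
        nxt += 1
    return merges

# hypothetical feature vectors: [nutrient cycling, disturbance, lateral transport]
X = np.array([[1, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]])
print(single_linkage(X))
```

Models whose process inventories differ in fewer entries merge earlier, which is exactly the similarity structure a dendrogram visualizes.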
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
Pannala, Sreekanth; Turner, John A.; Allu, Srikanth; Elwasif, Wael R.; Kalnaus, Sergiy; Simunovic, Srdjan; Kumar, Abhishek; Billings, Jay Jay; Wang, Hsin; Nanda, Jagjit
2015-08-19
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.
Welz, Oliver; Burke, Michael P.; Antonov, Ivan O.; Goldsmith, C. Franklin; Savee, John David; Osborn, David L.; Taatjes, Craig A.; Klippenstein, Stephen J.; Sheps, Leonid
2015-04-10
We studied low-temperature propane oxidation at P = 4 Torr and T = 530, 600, and 670 K by time-resolved multiplexed photoionization mass spectrometry (MPIMS), which probes the reactants, intermediates, and products with isomeric selectivity using tunable synchrotron vacuum-UV ionizing radiation. The oxidation is initiated by pulsed laser photolysis of oxalyl chloride, (COCl)2, at 248 nm, which rapidly generates a ~1:1 mixture of 1-propyl (n-propyl) and 2-propyl (i-propyl) radicals via the fast Cl + propane reaction. At all three temperatures, the major stable product species is propene, formed in the propyl + O2 reactions by direct HO2 elimination from both n- and i-propyl peroxy radicals. The experimentally derived propene yields relative to the initial concentration of Cl atoms are (20 ± 4)% at 530 K, (55 ± 11)% at 600 K, and (86 ± 17)% at 670 K at a reaction time of 20 ms. The lower yield of propene at low temperature reflects substantial formation of propyl peroxy radicals, which do not completely decompose on the experimental time scale. In addition, the C3H6O isomers methyloxirane, oxetane, acetone, and propanal are detected as minor products. Our measured yields of oxetane and methyloxirane, which are coproducts of OH radicals, suggest a revision of the OH formation pathways in models of low-temperature propane oxidation. The experimental results are modeled and interpreted using a multiscale informatics approach, presented in detail in a separate publication (Burke, M. P.; Goldsmith, C. F.; Klippenstein, S. J.; Welz, O.; Huang, H.; Antonov, I. O.; Savee, J. D.; Osborn, D. L.; Zádor, J.; Taatjes, C. A.; Sheps, L. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions. J. Phys. Chem. A 2015, DOI: 10.1021/acs.jpca.5b01003). Additionally, we found that the model predicts the time profiles and yields of the experimentally observed primary products well.
Wang, Minghuai; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Schanen, D.; Xiao, Heng; Liu, Xiaohong; Rasch, Philip J.; Guo, Zhun
2015-06-01
In this study, a higher-order turbulence closure scheme, called Cloud Layers Unified by Binormals (CLUBB), is implemented into a Multi-scale Modeling Framework (MMF) model to improve low cloud simulations. The performance of CLUBB in MMF simulations with two different microphysics configurations (one-moment cloud microphysics without aerosol treatment and two-moment cloud microphysics coupled with aerosol treatment) is evaluated against observations and further compared with results from the Community Atmosphere Model, Version 5 (CAM5) with conventional cloud parameterizations. CLUBB is found to improve low cloud simulations in the MMF, and the improvement is particularly evident in the stratocumulus-to-cumulus transition regions. Compared to the single-moment cloud microphysics, CLUBB with two-moment microphysics produces clouds that are closer to the coast, in better agreement with observations. In the stratocumulus-to-cumulus transition regions, CLUBB with two-moment cloud microphysics produces shortwave cloud forcing in better agreement with observations, while CLUBB with single-moment cloud microphysics overestimates shortwave cloud forcing. CLUBB is further found to produce quantitatively similar improvements in the MMF and CAM5, with slightly better performance in the MMF simulations (e.g., MMF with CLUBB generally produces low clouds that are closer to the coast than CAM5 with CLUBB). Improved low cloud simulations in the MMF make it an even more attractive tool for studying aerosol-cloud-precipitation interactions.
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing (The University of Iowa); Choi, Jiwoong (The University of Iowa); Hoffman, Eric A. (The University of Iowa); Tawhai, Merryn H. (The University of Auckland); Lin, Ching-Long (The University of Iowa)
2013-07-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields unphysiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.
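The C1-continuous cubic interpolation across a small number of imaged lung volumes can be illustrated with piecewise-cubic Hermite interpolation using finite-difference knot slopes. The phase/volume values and function below are hypothetical; the in-house method's details may differ.

```python
import numpy as np

def hermite_interp(t, tk, yk):
    """Piecewise-cubic Hermite interpolation with finite-difference
    knot slopes, giving a C1-continuous curve through (tk, yk)."""
    tk, yk = np.asarray(tk, float), np.asarray(yk, float)
    m = np.gradient(yk, tk)  # central differences inside, one-sided at ends
    t = np.atleast_1d(np.asarray(t, float))
    i = np.clip(np.searchsorted(tk, t) - 1, 0, len(tk) - 2)
    h = tk[i + 1] - tk[i]
    s = (t - tk[i]) / h
    h00 = 2*s**3 - 3*s**2 + 1          # Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*yk[i] + h10*h*m[i] + h01*yk[i+1] + h11*h*m[i+1]

# hypothetical lung volumes (L) at three imaged breathing phases
phase = [0.0, 0.5, 1.0]
volume = [2.8, 4.1, 3.0]
print(hermite_interp([0.0, 0.25, 0.5, 0.75, 1.0], phase, volume))
```

Sharing one slope per knot between adjacent cubic segments is what guarantees a continuous first derivative, i.e. smooth flow rates across the breathing cycle rather than kinks at the imaged volumes.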
Grell, Georg; Fast, Jerome D.; Gustafson, William I.; Peckham, Steven E.; McKeen, Stuart A.; Salzmann, Marc; Freitas, Saulo
2010-01-01
This is a conference proceeding that is now being put together as a book. This is chapter 2 of the book: "INTEGRATED SYSTEMS OF MESO-METEOROLOGICAL AND CHEMICAL TRANSPORT MODELS" published by Springer. The chapter title is "On-line Chemistry within WRF: Description and Evaluation of a State-of-the-Art Multiscale Air Quality and Weather Prediction Model." The original conference was the COST-728/NetFAM workshop on Integrated systems of meso-meteorological and chemical transport models, Danish Meteorological Institute, Copenhagen, May 21-23, 2007.
Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.
2012-03-01
In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
Tchelepi, Hamdi
2014-11-14
A multiscale linear-solver framework for the pressure equation associated with flow in highly heterogeneous porous formations was developed. The multiscale-based approach is cast in a general algebraic form, which facilitates integration of the new scalable linear solver in existing flow simulators. The Algebraic Multiscale Solver (AMS) is employed as a preconditioner within a multi-stage strategy. The formulations investigated include the standard MultiScale Finite-Element (MSFE) and MultiScale Finite-Volume (MSFV) methods. The local-stage solvers include incomplete factorization and the so-called Correction Functions (CF) associated with the MSFV approach. Extensive testing of AMS, as an iterative linear solver, indicates excellent convergence rates and computational scalability. AMS compares favorably with advanced Algebraic MultiGrid (AMG) solvers for highly detailed three-dimensional heterogeneous models. Moreover, AMS is expected to be especially beneficial in solving time-dependent problems of coupled multiphase flow and transport in large-scale subsurface formations.
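The multi-stage preconditioning idea can be illustrated on a toy pressure system: a Galerkin coarse-scale correction (standing in for the multiscale stage) followed by a damped-Jacobi local smoother inside a Richardson iteration. This is a schematic sketch under simplified assumptions, not the AMS implementation.

```python
import numpy as np

def poisson_matrix(n):
    # 1D Poisson stencil as a stand-in for the fine-scale pressure system
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def coarse_basis(n, nc):
    # piecewise-constant prolongation: each coarse dof covers n//nc fine cells
    P = np.zeros((n, nc))
    w = n // nc
    for j in range(nc):
        P[j * w:(j + 1) * w, j] = 1.0
    return P

def two_stage_solve(A, b, nc=32, tol=1e-8, maxit=500):
    """Richardson iteration with a two-stage preconditioner: a Galerkin
    coarse-scale correction followed by a damped-Jacobi local smoother."""
    P = coarse_basis(len(b), nc)
    Ac = P.T @ A @ P                 # Galerkin coarse operator
    Dinv = 1.0 / np.diag(A)
    x = np.zeros(len(b))
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x += P @ np.linalg.solve(Ac, P.T @ r)   # global (coarse) stage
        x += (2.0 / 3.0) * Dinv * (b - A @ x)   # local smoothing stage
    return x, k

A = poisson_matrix(64)
b = np.ones(64)
x, iters = two_stage_solve(A, b)
print(iters, np.linalg.norm(b - A @ x))
```

The division of labor mirrors the AMS design: the coarse stage removes smooth, long-range error that local solvers cannot see, while the local stage damps the oscillatory error the coarse space cannot represent.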
Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.; Jansen, K. E.; Antal, S. P.; Podowski, M. Z.
2012-07-01
The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available, which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)
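The sliding-window time averaging mentioned above amounts to a moving mean over the transient signal; a minimal sketch with hypothetical data:

```python
import numpy as np

def sliding_window_mean(signal, window):
    """Centered sliding-window time average, as used to extract mean
    flow parameters from a transient signal."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='valid')

# hypothetical transient velocity trace at one probe location
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(sliding_window_mean(u, 3))  # -> [2. 3. 4.]
```

The window length trades statistical smoothing against the ability to follow the transient: it must span many turbulent fluctuations yet stay short relative to the accident time scale.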
Modeling, Analysis and Simulation of Multiscale Preferential Flow - 8/05-8/10 - Final Report
Ralph Showalter; Malgorzata Peszynska
2012-07-03
The research topics of this project are: (1) modeling of preferential transport from mesoscale to macroscale; (2) modeling of fast flow in narrow fractures in porous media; (3) pseudo-parabolic models of dynamic capillary pressure; (4) adaptive computational upscaling of flow with inertia from porescale to mesoscale; (5) adaptive modeling of nonlinear coupled systems; and (6) adaptive modeling and a-posteriori estimators for coupled systems with heterogeneous data.
Multi-scale Modelling of bcc-Fe Based Alloys for Nuclear Applications
Malerba, Lorenzo
2008-07-01
, advanced techniques to fit interatomic potentials consistent with thermodynamics are proposed and the results of their application to the mentioned alloys are presented. Next, the development of advanced methods, based on the use of artificial intelligence, to improve both the physical reliability and the computational efficiency of kinetic Monte Carlo codes for the study of point-defect clustering and phase changes beyond the scale of MD, is reported. These recent progresses bear the promise of being able, in the near future, of producing reliable tools for the description of the microstructure evolution of realistic model alloys under irradiation. (author)
Costigan, Keeley Rochelle; Sauer, Jeremy A.; Dubey, Manvendra Krishna
2015-07-10
This report discusses the ghgas IC project, which enables an evaluation of LANL's HIGRAD model for creating atmospheric simulations.
Evaluation of the Multi-scale Modeling Framework Using Data from...
Office of Scientific and Technical Information (OSTI)
Unfortunately, the traditional parametric approach of diagnosing cloud and radiation properties for gridcells that are tens to hundreds of kilometers across from large-scale model ...
The Radiative Properties of Small Clouds: Multi-Scale Observations and Modeling
Feingold, Graham; McComiskey, Allison
2013-09-25
Warm, liquid clouds and their representation in climate models continue to represent one of the most significant unknowns in climate sensitivity and climate change. Our project combines ARM observations, LES modeling, and satellite imagery to characterize shallow clouds and the role of aerosol in modifying their radiative effects.
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
Pannala, S.; Turner, J. A.; Allu, S.; Elwasif, W. R.; Kalnaus, S.; Simunovic, S.; Kumar, A.; Billings, J. J.; Wang, H.; Nanda, J.
2015-08-21
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. Gaining an understanding of the role of these processes as well as development of predictive capabilities for design of better performing batteries requires synergy between theory, modeling, and simulation, and fundamental experimental work to support the models. This paper presents an overview of the authors' work across both experimental and computational efforts. In this paper, we describe a new, open source computational environment for battery simulations with an initial focus on lithium-ion systems but designed to support a variety of model types and formulations. This system has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. This paper also provides an overview of the experimental techniques to obtain crucial validation data to benchmark the simulations at various scales for performance as well as abuse. We detail some initial validation using characterization experiments such as infrared and neutron imaging and micro-Raman mapping. In addition, we identify opportunities for future integration of theory, modeling, and experiments.
Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers
Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em
2015-09-15
Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).
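The push/fetch-with-sampler idea can be illustrated with a toy sketch. This is a schematic Python analogue, not the actual header-only C++ MUI API; all names here are invented for illustration:

```python
import bisect

class ToyInterface:
    """Minimal sketch of a coupling interface: one solver pushes
    scattered point samples; the other fetches values through a
    sampler that interprets the data (here: linear interpolation).
    Names and behavior are illustrative only, not the MUI API."""

    def __init__(self):
        self.points = []   # sorted 1D coordinates
        self.values = []

    def push(self, x, value):
        i = bisect.bisect_left(self.points, x)
        self.points.insert(i, x)
        self.values.insert(i, value)

    def fetch(self, x, sampler):
        return sampler(x, self.points, self.values)

def linear_sampler(x, pts, vals):
    """Piecewise-linear interpolation; clamps outside the data range."""
    if x <= pts[0]:
        return vals[0]
    if x >= pts[-1]:
        return vals[-1]
    i = bisect.bisect_right(pts, x)
    w = (x - pts[i - 1]) / (pts[i] - pts[i - 1])
    return (1 - w) * vals[i - 1] + w * vals[i]

# Solver A pushes a coarse velocity profile; solver B samples it
# at its own grid points through a sampler of its choosing.
iface = ToyInterface()
for x in [0.0, 0.5, 1.0]:
    iface.push(x, 2.0 * x)        # u(x) = 2x on A's grid
u_mid = iface.fetch(0.25, linear_sampler)
```

The key design point the sketch mirrors is that the interpretation of the exchanged data (the sampler) belongs to the receiving solver, keeping the interface itself solver-independent.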
Understanding Creep Mechanisms in Graphite with Experiments, Multiscale Simulations, and Modeling
Eapen, Jacob; Murty, Korukonda; Burchell, Timothy
2014-06-02
Disordering mechanisms in graphite have a long history with conflicting viewpoints. Using Raman and X-ray photoelectron spectroscopy, electron microscopy, and X-ray diffraction experiments, together with atomistic modeling and simulations, the current project has developed a fundamental understanding of early-to-late stage radiation damage mechanisms in nuclear reactor grade graphite (NBG-18 and PCEA). We show that topological defects in graphite play an important role under neutron and ion irradiation.
SUSTAINABLE MANUFACTURING VIA MULTI-SCALE PHYSICS-BASED PROCESS MODELING AND MANUFACTURING-INFORMED DESIGN | Department of Energy
Third Wave Systems, Inc. - Minneapolis, MN
Micro-structural modeling tools for metals are being developed and used to demonstrate a design framework to improve the understanding of dynamic response and statistical variability. This project will enable design engineers to evaluate the effects of design changes and material selection; anticipate quality and cost prior to implementation on the factory floor; and enable low-waste, low-cost manufacturing.
Multiscale modeling of thermal conductivity of high burnup structures in UO_{2} fuels
Bai, Xian -Ming; Tonks, Michael R.; Zhang, Yongfeng; Hales, Jason D.
2015-12-22
The high burnup structure forming at the rim region in UO_{2} based nuclear fuel pellets has interesting physical properties such as improved thermal conductivity, even though it contains a high density of grain boundaries and micron-size gas bubbles. To understand this counterintuitive phenomenon, mesoscale heat conduction simulations with inputs from atomistic simulations and experiments were conducted to study the thermal conductivities of a small-grain high burnup microstructure and two large-grain unrestructured microstructures. We concluded that the phonon scattering effects caused by small point defects such as dispersed Xe atoms in the grain interior must be included in order to correctly predict the thermal transport properties of these microstructures. In extreme cases, even a small concentration of dispersed Xe atoms such as 10^{-5} can result in a lower thermal conductivity in the large-grain unrestructured microstructures than in the small-grain high burnup structure. The high-density grain boundaries in a high burnup structure act as defect sinks and can reduce the concentration of point defects in its grain interior and improve its thermal conductivity in comparison with its large-grain counterparts. Furthermore, an analytical model was developed to describe the thermal conductivity at different concentrations of dispersed Xe, bubble porosities, and grain sizes. Upon calibration, the model is robust and agrees well with independent heat conduction modeling over a wide range of microstructural parameters.
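The ingredients of such an analytical model can be sketched by chaining three standard effects: point-defect (phonon) scattering by dispersed Xe, grain-boundary (Kapitza) resistance, and a Maxwell-Eucken porosity correction. All coefficients below are illustrative placeholders, not the calibrated values from this work:

```python
def k_effective(k0, a_xe, x_xe, r_k, d_grain, porosity):
    """Illustrative effective thermal conductivity (W/m-K).

    k0       : intrinsic lattice conductivity
    a_xe     : point-defect (dispersed Xe) scattering coefficient
    x_xe     : dispersed Xe atomic fraction
    r_k      : Kapitza (grain-boundary) resistance, m^2 K / W
    d_grain  : grain size, m
    porosity : bubble porosity fraction
    """
    k_lattice = k0 / (1.0 + a_xe * x_xe)                # defect scattering
    k_grain = 1.0 / (1.0 / k_lattice + r_k / d_grain)   # GB resistance in series
    return k_grain * (1.0 - porosity) / (1.0 + porosity / 2.0)  # Maxwell-Eucken

# Small-grain HBS with swept-clean grain interiors vs. large grains
# retaining dispersed Xe: the defect term can dominate the GB term.
k_hbs = k_effective(k0=8.0, a_xe=4.0e5, x_xe=1e-7,
                    r_k=1e-9, d_grain=0.3e-6, porosity=0.1)
k_big = k_effective(k0=8.0, a_xe=4.0e5, x_xe=1e-5,
                    r_k=1e-9, d_grain=10e-6, porosity=0.02)
```

With these placeholder numbers the large-grain microstructure with dispersed Xe ends up less conductive than the small-grain high burnup structure, mirroring the qualitative conclusion of the abstract.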
Swift, D. C.; Paisley, Dennis L.; Kyrala, George A.; Hauer, Allan
2002-01-01
Ab initio quantum mechanics was used to construct a thermodynamically complete and rigorous equation of state for beryllium in the hexagonal and body-centred cubic structures, and to predict elastic constants as a function of compression. The equation of state agreed well with Hugoniot data and previously-published equations of state, but the temperatures were significantly different. The hexagonal/bcc phase boundary agreed reasonably well with published data, suggesting that the temperatures in our new equation of state were accurate. Shock waves were induced in single crystals and polycrystalline foils of beryllium, by direct illumination using the TRIDENT laser at Los Alamos. The velocity history at the surface of the sample was measured using a line-imaging VISAR, and transient X-ray diffraction (TXD) records were obtained with a plasma backlighter and X-ray streak cameras. The VISAR records exhibited elastic precursors, plastic waves, phase changes and spall. Dual TXD records were taken, in Bragg and Laue orientations. The Bragg lines moved in response to compression in the uniaxial direction. Because direct laser drive was used, the results had to be interpreted with the aid of radiation hydrodynamics simulations to predict the loading history for each laser pulse. In the experiments where there was evidence of polymorphism in the VISAR record, additional lines appeared in the Bragg and Laue records. The corresponding pressures were consistent with the phase boundary predicted by the quantum mechanical equation of state for beryllium. A model of the response of a single crystal of beryllium to shock loading is being developed using these new theoretical and experimental results. This model will be used in meso-scale studies of the response of the microstructure, allowing us to develop a more accurate representation of the behaviour of polycrystalline beryllium.
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
Title: Uncertainty quantification and multiscale mathematics. Authors: Trucano, Timothy Guy ...
Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.
2015-01-01
This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Figure 1 presents the evolution of the diffusion profiles of a containment granuloma over time.
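A one-dimensional analogue of the matrix-based steady-state approach can be sketched as follows; the grid, coefficients, and boundary values are illustrative, not those of the granuloma model:

```python
import numpy as np

def steady_state_diffusion(n, dx, d_coeff, source, boundary):
    """Solve -D u'' = s on an interior 1D grid with fixed-value
    (Dirichlet) ends by assembling and solving the steady-state
    linear system directly. Unlike explicit time stepping, no
    CFL-limited time step enters the computation."""
    a = np.zeros((n, n))
    b = np.asarray(source, dtype=float) * dx**2 / d_coeff
    for i in range(n):
        a[i, i] = 2.0                 # standard 3-point Laplacian stencil
        if i > 0:
            a[i, i - 1] = -1.0
        if i < n - 1:
            a[i, i + 1] = -1.0
    b[0] += boundary[0]               # fold Dirichlet values into the RHS
    b[-1] += boundary[1]
    return np.linalg.solve(a, b)

# Oxygen-like field: zero source, concentration fixed at the two ends;
# the steady solution is the linear profile between the boundary values.
u = steady_state_diffusion(n=49, dx=0.02, d_coeff=1.0,
                           source=np.zeros(49), boundary=(1.0, 0.0))
```

In the actual model the source term would carry the oxygen consumption of the cells in the granuloma, and the solve would be repeated as the consumption field evolves.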
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
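The two averaging operations at the heart of DMA, running time averaging during the fine-scale run followed by volume averaging onto a coarser grid, can be sketched in one dimension; the function names and toy signal are illustrative assumptions:

```python
import numpy as np

def running_time_average(avg, sample, step):
    """Incremental running mean, updated as the fine-scale run proceeds."""
    return avg + (sample - avg) / step

def volume_average(field, factor):
    """Average a 1D field onto a grid coarsened by `factor`."""
    return field.reshape(-1, factor).mean(axis=1)

# Fine-scale 'DNS' samples: accumulate a running time average...
rng = np.random.default_rng(1)
avg = np.zeros(16)
for step in range(1, 201):
    sample = np.sin(np.linspace(0, np.pi, 16)) + 0.1 * rng.standard_normal(16)
    avg = running_time_average(avg, sample, step)

# ...then volume-average the result onto a grid coarser by a factor of 4
coarse = volume_average(avg, 4)
```

In the full method the same two operations are also applied to products of variables, which is what generates the coupling correlations passed to the next coarser mesh.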
Weston, David; Hanson, Paul J; Norby, Richard J; Tuskan, Gerald A; Wullschleger, Stan D
2012-01-01
Network analysis is now a common statistical tool for molecular biologists. Network algorithms are readily used to model gene, protein and metabolic correlations, providing insight into pathways driving biological phenomena. One output from such an analysis is a candidate gene list that can be responsible, in part, for the biological process of interest. The question remains, however, as to whether molecular network analysis can be used to inform process models at higher levels of biological organization. In our previous work, transcriptional networks derived from three plant species were constructed, interrogated for orthology and then correlated to photosynthetic inhibition at elevated temperature. One unique aspect of that study was the link from co-expression networks to net photosynthesis. In this addendum, we propose a conceptual model where traditional network analysis can be linked to whole-plant models thereby informing predictions on key processes such as photosynthesis, nutrient uptake and assimilation, and C partitioning.
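A minimal version of the network construction step, correlating expression profiles and thresholding them into an adjacency matrix, might look like this; the toy data, names, and threshold are illustrative assumptions:

```python
import numpy as np

def coexpression_network(expr, threshold=0.8):
    """Build a co-expression adjacency matrix from a genes x samples
    expression table: connect gene pairs whose Pearson correlation
    exceeds `threshold` in absolute value."""
    corr = np.corrcoef(expr)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)   # no self-edges
    return adj

# Toy data: genes 0 and 1 track each other; gene 2 is independent noise
rng = np.random.default_rng(2)
base = rng.standard_normal(50)
expr = np.vstack([base,
                  base + 0.1 * rng.standard_normal(50),
                  rng.standard_normal(50)])
adj = coexpression_network(expr)
```

Modules of densely interconnected genes in such an adjacency matrix are the natural candidates to correlate against whole-plant quantities like net photosynthesis.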
A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations
Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; Hammond, Glenn E.
2015-06-01
Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation. A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application
Graph modeling systems and methods
Neergaard, Mike
2015-10-13
An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
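The critical-point-of-failure check can be illustrated with a simplified sketch that flags vertices whose removal disconnects the network. The patented graph model distinguishes terminating and non-terminating vertices; this toy treats all vertices uniformly, and its names and data are illustrative:

```python
def is_critical(adj, v):
    """A vertex is a critical point of failure if removing it
    disconnects the remaining network (an articulation-point check
    via graph traversal). `adj` maps vertex -> set of neighbors."""
    rest = [u for u in adj if u != v]
    if len(rest) <= 1:
        return False
    seen = {rest[0]}
    stack = [rest[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w != v and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) < len(rest)   # some vertex became unreachable

# Toy chain network: the interior nodes 'b' and 'c' are bridges,
# so each is a critical point of failure.
net = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
critical = [v for v in net if is_critical(net, v)]
```

In the patent's formulation, a non-terminating vertex placed along an edge lets the same test also flag non-nodal vulnerabilities, such as a shared conduit carrying several transmission paths.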
Niyogi, Devdutta S.
2013-06-07
The CLASIC experiment was conducted over the US Southern Great Plains (SGP) in June 2007 with the objective of developing an enhanced understanding of cumulus convection, particularly as it relates to land surface conditions. This project was designed to assist with improving the representation of land-atmosphere convection initiation, which is important for global and regional models. The study helped address one of the critical documented deficiencies in the models central to the ARM objectives for cumulus convection initiation, particularly under summertime conditions. This project was guided by a scientific question building on the CLASIC theme questions: What is the effect of improved land surface representation on the ability of coupled models to simulate cumulus and convection initiation? The focus was on the US Southern Great Plains region. Since the CLASIC period was anomalously wet, the strategy has been to use other periods and domains to develop a comparative assessment for the CLASIC data period, and to understand the mechanisms of the anomalously wet conditions on the tropical systems and convection over land. The data periods include the IHOP 2002 field experiment, which covered roughly the same domain as CLASIC in the SGP, and some of the DOE-funded Ameriflux datasets.
Multiscale Molecular Simulations at the Petascale (Parallelization of Reactive Force Field Model for Blue Gene/Q): ALCF-2 Early Science Program Technical Report
Office of Scientific and Technical Information (OSTI)
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m^{2}) and longwave cloud forcing (~5 W/m^{2}) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.
Parallel Paradigm for Ultraparallel Multi-Scale Brain Blood Flow
U.S. Department of Energy (DOE) all webpages (Extended Search)
Simulations | Argonne Leadership Computing Facility
Authors: Grinberg, L.; Karniadakis, G.E. In this paper we present one approach to building a scalable solver, NekTarG, for the solution of multi-scale and large-size problems [1]. NekTarG has been designed for multi-scale blood flow modeling. The macro-vascular scales describing the flow dynamics in large vessels are coupled to the meso-vascular scales unfolding dynamics of
Multi-Scale Simulations Solve a Plasma Turbulence Mystery
U.S. Department of Energy (DOE) all webpages (Extended Search)
Coupled Model Reproduces Experimental Electron Heat Losses. March 7, 2016. Contact: Kathy Kincade, kkincade@lbl.gov, +1 510 495 2124. [Image caption: the inside of the Alcator C-Mod tokamak, with a representative cross-section of a plasma; the inset shows the approximate domain for one of the multi-scale simulations and a graphic of the plasma turbulence.]
Multiscale reactive molecular dynamics | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Chris Knight, Gerrick E. Lindberg, Gregory A. Voth. Many processes important to chemistry, materials science, and biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods to converge statistical properties. In this paper, a method is presented that uses data
Andrade, José E.; Rudnicki, John W.
2012-12-14
In this project, a predictive multiscale framework will be developed to simulate the strong coupling between solid deformations and fluid diffusion in porous rocks. We intend to improve macroscale modeling by incorporating fundamental physical modeling at the microscale in a computationally efficient way. This is an essential step toward further developments in multiphysics modeling, linking hydraulic, thermal, chemical, and geomechanical processes. This research will focus on areas where severe deformations are observed, such as deformation bands, where classical phenomenology breaks down. Multiscale geometric complexities and key geomechanical and hydraulic attributes of deformation bands (e.g., grain sliding and crushing, and pore collapse, causing interstitial fluid expulsion under saturated conditions), can significantly affect the constitutive response of the skeleton and the intrinsic permeability. Discrete mechanics (DEM) and the lattice Boltzmann method (LBM) will be used to probe the microstructure, under the current state, to extract the evolution of macroscopic constitutive parameters and the permeability tensor. These evolving macroscopic constitutive parameters are then directly used in continuum scale predictions using the finite element method (FEM) accounting for the coupled solid deformation and fluid diffusion. A particularly valuable aspect of this research is the thorough quantitative verification and validation program at different scales. The multiscale homogenization framework will be validated using X-ray computed tomography and 3D digital image correlation in situ at the Advanced Photon Source at Argonne National Laboratory. Also, the hierarchical computations at the specimen level will be validated using the aforementioned techniques in samples of sandstone undergoing deformation bands.
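The way an evolving microstructure feeds an updated intrinsic permeability to the macroscale FEM model can be sketched with the classical Kozeny-Carman relation standing in for the LBM probe. This is illustrative only, the project extracts permeability from pore-scale flow simulation rather than from a closed form, and all numbers are assumptions:

```python
def kozeny_carman(porosity, grain_diameter):
    """Kozeny-Carman estimate of intrinsic permeability (m^2) from
    porosity and mean grain diameter. Here it stands in for the
    pore-scale (LBM) probe of the current microstructural state."""
    phi = porosity
    return grain_diameter**2 * phi**3 / (180.0 * (1.0 - phi)**2)

# Pore collapse and grain crushing inside a deformation band: porosity
# and grain size drop, and the macroscale model receives a sharply
# reduced permeability for those elements.
k_host = kozeny_carman(porosity=0.30, grain_diameter=2e-4)
k_band = kozeny_carman(porosity=0.15, grain_diameter=1e-4)
```

Even this crude closure reproduces the qualitative effect the project targets: deformation bands act as strong barriers to interstitial fluid flow relative to the host rock.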
A Multiscale Bidirectional Coupling Framework
Kabilan, Senthil; Kuprat, Andrew P.; Hlastala, Michael P.; Corley, Richard A.; Einstein, Daniel R.
2011-12-01
The lung is geometrically articulated across multiple scales from the trachea to the alveoli. A major computational challenge is to tightly link ODEs that describe lower scales to 3D finite element or finite volume models of airway mechanics using iterative communication between scales. In this study, we developed a novel multiscale computational framework for bidirectionally coupling 3D CFD models and systems of lower-order ODEs. To validate the coupling framework, four- and eight-generation Weibel lung models were constructed. For the coupled CFD-ODE simulations, the lung models were truncated at different generations and an RL circuit represented the truncated portion. The flow characteristics from the coupled models were compared to untruncated full 3D CFD models at peak inhalation and peak exhalation. Results showed that at no time in any simulation was the difference in mass flux and/or pressure at a given location between uncoupled and coupled models greater than 2.43%. The flow characteristics at prime locations for the coupled models showed good agreement with uncoupled models. Remarkably, due to reuse of the Krylov subspace, the cost of the ODE coupling is not much greater than uncoupled full 3D CFD computations with simple prescribed pressure values at the outlets.
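The RL outlet boundary condition described above can be sketched in a few lines: the truncated airway subtree is replaced by a resistance R and inertance L obeying P = R·Q + L·dQ/dt, which the 3D model sees at the outlet. A minimal forward-Euler sketch with illustrative parameter values (not the Krylov-subspace scheme of the paper):

```python
def rl_outlet_flow(pressure, R, L, dt, q0=0.0):
    """Integrate the RL outlet model dQ/dt = (P(t) - R*Q)/L with forward Euler.

    pressure: sequence of pressure samples at the truncated outlet [Pa]
    R: viscous resistance of the truncated subtree [Pa*s/m^3]
    L: inertance of the truncated subtree [Pa*s^2/m^3]
    Returns the flow history Q(t) [m^3/s].
    """
    q = q0
    flows = []
    for p in pressure:
        q += dt * (p - R * q) / L
        flows.append(q)
    return flows

# Under a constant driving pressure the flow relaxes toward the
# resistive limit P/R with time constant L/R.
flows = rl_outlet_flow([100.0] * 2000, R=50.0, L=2.0, dt=0.001)
```

In the coupled setting, the 3D solver would supply the outlet pressure at each step and receive the resulting flow back as a boundary flux.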
Costigan, Keeley Rochelle; Dubey, Manvendra Krishna
2015-07-10
Atmospheric models are compared, in collaboration with LANL and the University of Michigan, to understand emissions and the state of the atmosphere from a modeling perspective.
MULTISCALE MATHEMATICS FOR BIOMASS CONVERSION TO RENEWABLE HYDROGEN
Vlachos, Dionisios; Plechac, Petr; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
Kim, G. H.; Smith, K.
2009-05-01
Addresses battery requirements for electric vehicles using a model that evaluates physical-chemical processes in lithium-ion batteries, from atomic variations to vehicle interface controls.
Multi-Scale Multi-Dimensional Li-Ion Battery Model for Better Design and Management (Presentation)
Kim, G.-H.; Smith, K.
2008-10-01
The developed model is used to provide a better understanding and to help answer engineering questions about improving the design, operational strategy, management, and safety of cells.
Multilingual interfaces for parallel coupling in multiphysics and multiscale systems.
Ong, E. T.; Larson, J. W.; Norris, B.; Jacob, R. L.; Tobis, M.; Steder, M.; Mathematics and Computer Science; Univ. of Wisconsin; Australian National Univ.; Univ. of Chicago
2007-01-01
Multiphysics and multiscale simulation systems are emerging as a new grand challenge in computational science, largely because of increased computing power provided by the distributed-memory parallel programming model on commodity clusters. These systems often present a parallel coupling problem in their intercomponent data exchanges. Another potential problem in these coupled systems is language interoperability between their various constituent codes. In anticipation of combined parallel coupling/language interoperability challenges, we have created a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit. We describe the method used for automatically generating the bindings using the Babel language interoperability tool, and illustrate with short examples how MCT can be used from the C++ and Python languages. We report preliminary performance results for the MCT interpolation benchmark. We conclude with a discussion of the significance of this work to the rapid prototyping of large parallel coupled systems.
Freed, Alan D.; Einstein, Daniel R.
2011-04-14
An isotropic constitutive model for the parenchyma of the lung has been derived from the theory of hypo-elasticity. The intent is to use it to represent the mechanical response of this soft tissue in sophisticated computational fluid-dynamic models of the lung. This demands that the continuum model be accurate, yet simple and efficient. An objective algorithm for its numerical integration is provided. The response of the model is determined for several boundary-value problems whose experiments are used for material characterization. The effective elastic, bulk, and shear moduli, and Poisson's ratio, as tangent functions, are also derived. The model is characterized against published experimental data for lung. A bridge between this continuum model and a dodecahedral model of alveolar geometry is investigated, with preliminary findings being reported.
Method and apparatus for modeling interactions
Xavier, Patrick G.
2002-01-01
The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of previous approaches. The method of the present invention comprises representing two bodies undergoing translations by two swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention is more robust and allows faster modeling than previous methods.
Multi-scale First-Principles Modeling of Three-Phase System of Polymer Electrolyte Membrane Fuel Cell
Brunello, Giuseppe; Choi, Ji; Harvey, David; Jang, Seung
2012-07-01
The three-phase system consisting of Nafion, graphite, and platinum in the presence of water is studied using molecular dynamics (MD) simulation. The force fields describing the molecular interactions between the components in the system are developed to reproduce the energies calculated from density functional theory modeling. The configuration of such a complicated three-phase system is predicted through MD simulations. The nanophase segregation and transport properties are investigated from the equilibrium state. The coverage of the electrolyte on the platinum surface and the dissolution of oxygen are analyzed.
Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, D.; Zhao, Kaiguang; LeDuc, Stephen D.; Xu, Min; Xiong, Wei; Zhang, Aiping; Izaurralde, Roberto C.; Thomson, Allison M.; West, Tristram O.; Post, W. M.
2014-05-01
The development of effective measures to stabilize atmospheric CO2 concentration and mitigate negative impacts of climate change requires accurate quantification of the spatial variation and magnitude of the terrestrial carbon (C) flux. However, the spatial pattern and strength of terrestrial C sinks and sources remain uncertain. In this study, we designed a spatially-explicit agroecosystem modeling system by integrating the Environmental Policy Integrated Climate (EPIC) model with multiple sources of geospatial and surveyed datasets (including crop type map, elevation, climate forcing, fertilizer application, tillage type and distribution, and crop planting and harvesting date), and applied it to examine the sensitivity of cropland C flux simulations to two widely used soil databases (i.e., State Soil Geographic (STATSGO) at a scale of 1:250,000 and Soil Survey Geographic (SSURGO) at a scale of 1:24,000) in Iowa, USA. To efficiently execute numerous EPIC runs resulting from the use of high-resolution spatial data (56 m), we developed a parallelized version of EPIC. Both STATSGO and SSURGO led to similar simulations of crop yields and Net Ecosystem Production (NEP) estimates at the State level. However, substantial differences were observed at the county and sub-county (grid) levels. In general, the fine-resolution SSURGO data outperformed the coarse-resolution STATSGO data for county-scale crop-yield simulation, and within STATSGO, the area-weighted approach provided more accurate results. Further analysis showed that spatial distribution and magnitude of simulated NEP were more sensitive to the resolution difference between SSURGO and STATSGO at the county or grid scale. For over 60% of the cropland areas in Iowa, the deviations between STATSGO- and SSURGO-derived NEP were larger than 1 Mg C ha^-1 yr^-1, or about half of the average cropland NEP, highlighting the significant uncertainty in spatial distribution and magnitude of simulated C fluxes resulting from
A Many-Task Parallel Approach for Multiscale Simulations of Subsurface Flow and Reactive Transport
Scheibe, Timothy D.; Yang, Xiaofan; Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Palmer, Bruce J.; Tartakovsky, Alexandre M.
2014-12-16
Continuum-scale models have long been used to study subsurface flow, transport, and reactions but lack the ability to resolve processes that are governed by pore-scale mixing. Recently, pore-scale models, which explicitly resolve individual pores and soil grains, have been developed to more accurately model pore-scale phenomena, particularly reaction processes that are controlled by local mixing. However, pore-scale models are prohibitively expensive for modeling application-scale domains. This motivates the use of a hybrid multiscale approach in which continuum- and pore-scale codes are coupled either hierarchically or concurrently within an overall simulation domain (time and space). This approach is naturally suited to an adaptive, loosely-coupled many-task methodology with three potential levels of concurrency. Each individual code (pore- and continuum-scale) can be implemented in parallel; multiple semi-independent instances of the pore-scale code are required at each time step providing a second level of concurrency; and Monte Carlo simulations of the overall system to represent uncertainty in material property distributions provide a third level of concurrency. We have developed a hybrid multiscale model of a mixing-controlled reaction in a porous medium wherein the reaction occurs only over a limited portion of the domain. Loose, minimally-invasive coupling of pre-existing parallel continuum- and pore-scale codes has been accomplished by an adaptive script-based workflow implemented in the Swift workflow system. We describe here the methods used to create the model system, adaptively control multiple coupled instances of pore- and continuum-scale simulations, and maximize the scalability of the overall system. We present results of numerical experiments conducted on NERSC supercomputing systems; our results demonstrate that loose many-task coupling provides a scalable solution for multiscale subsurface simulations with minimal overhead.
Multiscale simulations of blood-flow: from a platelet to an artery | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Multiscale simulations of blood-flow: from a platelet to an artery. Authors: L. Grinberg, M. Deng, H. Lei, J. A. Insley, G. E. Karniadakis. We review our recent advances on multiscale modeling of blood flow including blood rheology. We focus on the objectives, methods, computational complexity, and overall methodology for simulations at the level of the glycocalyx (< 1 μm), blood cells (2-8 μm), and up to larger arteries (O(cm)). The main findings of our research and
Hybrid multiscale simulation of a mixing-controlled reaction
Scheibe, Timothy D.; Schuchardt, Karen L.; Agarwal, Khushbu; Chase, Jared M.; Yang, Xiaofan; Palmer, Bruce J.; Tartakovsky, Alexandre M.; Elsethagen, Todd O.; Redden, George D.
2015-09-01
Continuum-scale models have been used to study subsurface flow, transport, and reactions for many years but lack the capability to resolve fine-grained processes. Recently, pore-scale models, which operate at scales of individual soil grains, have been developed to more accurately model and study pore-scale phenomena, such as mineral precipitation and dissolution reactions, microbially-mediated surface reactions, and other complex processes. However, these highly-resolved models are prohibitively expensive for modeling domains of sizes relevant to practical problems. To broaden the utility of pore-scale models for larger domains, we developed a hybrid multiscale model that initially simulates the full domain at the continuum scale and applies a pore-scale model only to areas of high reactivity. Since the location and number of pore-scale model regions vary as the reactions proceed, an adaptive script defines the number and location of pore regions within each continuum iteration and initializes pore-scale simulations from macroscale information. Another script communicates information from the pore-scale simulation results back to the continuum scale. These components loosely couple the pore- and continuum-scale codes into a single hybrid multiscale model implemented within the Swift workflow environment. In this paper, we consider an irreversible homogeneous bimolecular reaction (two solutes reacting to form a third solute) in a 2D test problem. This paper is focused on the approach used for multiscale coupling between pore- and continuum-scale models, application to a realistic test problem, and implications of the results for predictive simulation of mixing-controlled reactions in porous media. Our results and analysis demonstrate that loose coupling provides a feasible, efficient, and scalable approach for multiscale subsurface simulations.
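The adaptive loose-coupling loop described in this and the preceding entry can be sketched schematically: each continuum iteration flags high-reactivity regions, hands them to pore-scale instances, and maps the results back. The function and parameter names below are illustrative stand-ins, not the Swift workflow scripts themselves:

```python
def hybrid_step(conc, reactivity, threshold, continuum_update, pore_update):
    """One loosely coupled hybrid multiscale step (schematic).

    conc: dict cell_id -> solute concentration at the continuum scale
    reactivity: dict cell_id -> local reaction-rate indicator
    Cells whose indicator exceeds `threshold` are handed to a pore-scale
    model; everything else advances with the continuum model alone.
    """
    # 1. Decide which cells need pore-scale resolution this iteration.
    active = [c for c, r in reactivity.items() if r > threshold]
    # 2. Advance the continuum model over the whole domain.
    conc = continuum_update(conc)
    # 3. Run (conceptually concurrent) pore-scale simulations, initialized
    #    from the macroscale state, and map their results back.
    for cell in active:
        conc[cell] = pore_update(conc[cell])
    return conc
```

In the paper's setting, steps 1-3 repeat each time step, with the number of pore-scale instances changing as the reaction front moves.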
Spectral method for a kinetic swarming model
Gamba, Irene M.; Haack, Jeffrey R.; Motsch, Sebastien
2015-04-28
Here we present the first numerical method for a kinetic description of the Vicsek swarming model. The kinetic model poses a unique challenge, as there is a distribution dependent collision invariant to satisfy when computing the interaction term. We use a spectral representation linked with a discrete constrained optimization to compute these interactions. To test the numerical scheme we investigate the kinetic model at different scales and compare the solution with the microscopic and macroscopic descriptions of the Vicsek model. Lastly, we observe that the kinetic model captures key features such as vortex formation and traveling waves.
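For reference, the microscopic Vicsek model that the kinetic description coarse-grains is simple to state: each particle moves at constant speed and relaxes its heading toward the mean heading of its neighbors, perturbed by noise. A minimal 2D sketch with periodic boundaries (parameter names illustrative):

```python
import numpy as np

def vicsek_step(pos, theta, v0, radius, eta, box, rng):
    """One update of the microscopic Vicsek model.

    Each particle moves at constant speed v0 and aligns its heading theta
    to the mean heading of neighbors within `radius`, plus uniform angular
    noise of width eta, in a periodic box of side `box`.
    """
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)          # periodic minimum image
        mask = (d ** 2).sum(axis=1) < radius ** 2
        # Mean direction of neighbors (self included), via the angle of
        # the summed unit heading vectors.
        new_theta[i] = np.arctan2(np.sin(theta[mask]).sum(),
                                  np.cos(theta[mask]).sum())
    new_theta += rng.uniform(-eta / 2, eta / 2, n)
    vel = v0 * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % box, new_theta
```

The kinetic model replaces this particle-level update with an evolution equation for the one-particle distribution, which is where the distribution-dependent collision invariant mentioned above arises.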
Other: Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm
Office of Scientific and Technical Information (OSTI)
Multiscale Blood Flow Simulations | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Brain blood flow simulation with NekTar, a continuum model. Leopold Grinberg, George Em Karniadakis, Brown University; Vitali Morozov, Joseph A. Insley, Michael E. Papka, Kalyan Kumaran, Argonne National Laboratory; Dmitry A. Fedosov, Forschungszentrum Juelich. Multiscale Blood Flow Simulations. PI Name: George Karniadakis. PI Email: gk@dam.brown.edu. Institution: Brown University. Allocation Program: INCITE. Allocation Hours at ALCF: 71 Million. Year: 2013. Research Domain: Biological
"Multiscale Capabilities for Exploring Transport Phenomena in Batteries": Ab Initio Calculations on Defective LiFePO4
Office of Scientific and Technical Information (OSTI)
Multiscale Simulations of Human Pathologies | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Inset shows the time evolution of thrombus formation. George Karniadakis, Paris Perdikaris, and Yue Yu, Brown University; Leopold Grinberg, IBM T. J. Watson Research Center and Brown University. Multiscale Simulations of Human Pathologies. PI Name: George Karniadakis. PI Email: ...
Computational Physics and Methods
U.S. Department of Energy (DOE) all webpages (Extended Search)
Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. [Image captions: growth and emissivity of a young galaxy hosting a supermassive black hole, calculated with the cosmological code ENZO and post-processed with the radiative transfer code AURORA; Rayleigh-Taylor turbulence imaging, the largest turbulence simulations to date.] Advanced multi-scale modeling; turbulence datasets; density iso-surfaces
Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.
2015-04-21
Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that
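The general idea of generating non-Markovian CG dynamics from auxiliary variables can be illustrated with a single fictitious particle: the CG coordinate evolves under deterministic dynamics plus a harmonic coupling to an overdamped auxiliary variable, and only the auxiliary variable feels the Langevin thermostat. This is a schematic of the Zwanzig-type heat-bath construction the abstract cites, not the MS-CG fitting procedure itself; all names and values are illustrative:

```python
import math
import random

def aux_langevin_step(x, v, s, dt, k=1.0, m=1.0, gamma=1.0, kT=1.0, rng=random):
    """One step for a CG coordinate x (mass m, velocity v) harmonically
    coupled (stiffness k) to an overdamped auxiliary variable s that is
    thermostatted by Langevin noise. Eliminating s yields a memory kernel
    (a non-Markovian generalized Langevin equation) for x."""
    # Velocity-Verlet-style update of x under the coupling force alone
    # (any CG potential force on x is omitted in this sketch).
    f = -k * (x - s)
    v += 0.5 * dt * f / m
    x += dt * v
    f = -k * (x - s)
    v += 0.5 * dt * f / m
    # Overdamped Langevin (Euler-Maruyama) update of the auxiliary variable.
    s += dt * k * (x - s) / gamma \
         + math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
    return x, v, s
```

Because the noise enters x only indirectly through s, the effective force on x at time t depends on its past trajectory, which is exactly the non-Markovian behavior the paper targets.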
Multiscale characterization and analysis of shapes
Prasad, Lakshman; Rao, Ramana
2002-01-01
An adaptive multiscale method approximates shapes with continuous or uniformly and densely sampled contours, with the purpose of sparsely and nonuniformly discretizing the boundaries of shapes at any prescribed resolution, while at the same time retaining the salient shape features at that resolution. In another aspect, a fundamental geometric filtering scheme using the Constrained Delaunay Triangulation (CDT) of polygonized shapes creates an efficient parsing of shapes into components that have semantic significance dependent only on the shapes' structure and not on their representations per se. A shape skeletonization process generalizes to sparsely discretized shapes, with the additional benefit of prunability to filter out irrelevant and morphologically insignificant features. The skeletal representation of characters of varying thickness and the elimination of insignificant and noisy spurs and branches from the skeleton greatly increases the robustness, reliability and recognition rates of character recognition algorithms.
The Adaptive Multi-scale Simulation Infrastructure
Tobin, William R.
2015-09-01
The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows minimally intrusive adaptation of existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular attention has been paid to the development of scale-sensitive load-balancing operations that allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.
Method and apparatus for modeling interactions
Xavier, Patrick G.
2000-08-08
A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.
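The leaf volumes in such a hierarchy can be built conservatively for pure translations: a box containing the body at both the start and the end of its sweep contains every point the body passes through in between, since each point sweeps a straight segment between its endpoints. A minimal 2D sketch with axis-aligned boxes and an overlap test (illustrative only, not the hierarchical representation of the patent):

```python
def swept_aabb(points, translation):
    """Axis-aligned bounding box of a point set swept along a straight
    translation: the box containing both the start and end configurations
    is a conservative bound on the swept region."""
    tx, ty = translation
    xs = [p[0] for p in points] + [p[0] + tx for p in points]
    ys = [p[1] for p in points] + [p[1] + ty for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def aabb_overlap(a, b):
    """True if two axis-aligned boxes ((x0, y0), (x1, y1)) intersect."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

A collision query between two swept bodies would first test such boxes and descend into finer leaves only where the boxes overlap, which is the pruning benefit of the hierarchical representation.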
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
MULTISCALE DYNAMICS OF SOLAR MAGNETIC STRUCTURES
Uritsky, Vadim M.; Davila, Joseph M.
2012-03-20
Multiscale topological complexity of the solar magnetic field is among the primary factors controlling energy release in the corona, including associated processes in the photospheric and chromospheric boundaries. We present a new approach for analyzing multiscale behavior of the photospheric magnetic flux underlying these dynamics as depicted by a sequence of high-resolution solar magnetograms. The approach involves two basic processing steps: (1) identification of timing and location of magnetic flux origin and demise events (as defined by DeForest et al.) by tracking spatiotemporal evolution of unipolar and bipolar photospheric regions, and (2) analysis of collective behavior of the detected magnetic events using a generalized version of the Grassberger-Procaccia correlation integral algorithm. The scale-free nature of the developed algorithms makes it possible to characterize the dynamics of the photospheric network across a wide range of distances and relaxation times. Three types of photospheric conditions are considered to test the method: a quiet photosphere, a solar active region (NOAA 10365) in a quiescent non-flaring state, and the same active region during a period of M-class flares. The results obtained show (1) the presence of a topologically complex asymmetrically fragmented magnetic network in the quiet photosphere driven by meso- and supergranulation, (2) the formation of non-potential magnetic structures with complex polarity separation lines inside the active region, and (3) statistical signatures of canceling bipolar magnetic structures coinciding with flaring activity in the active region. Each of these effects can represent an unstable magnetic configuration acting as an energy source for coronal dissipation and heating.
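The standard (non-generalized) Grassberger-Procaccia correlation integral underlying step (2) counts the fraction of distinct event pairs separated by less than a scale r; the correlation dimension is then estimated from the slope of log C(r) versus log r. A minimal sketch:

```python
import numpy as np

def correlation_integral(points, r):
    """Grassberger-Procaccia correlation integral C(r): the fraction of
    distinct point pairs separated by a distance less than r."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)           # each unordered pair once
    return (dist[iu] < r).mean()
```

The paper's generalized version extends this pair counting to spatiotemporal separations between detected magnetic flux origin and demise events; the sketch above shows only the basic spatial form.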
Energy Science and Technology Software Center (OSTI)
2009-08-01
The code to be released is a new addition to the LAMMPS molecular dynamics code. LAMMPS is developed and maintained by Sandia, is publicly available, and is used widely by both national laboratories and academics. The new addition to be released enables LAMMPS to perform molecular dynamics simulations of shock waves using the Multi-scale Shock Simulation Technique (MSST), which we developed and have previously published. This technique enables molecular dynamics simulations of shock waves in materials for orders of magnitude longer timescales than the direct, commonly employed approach.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method that filters asset operating data values acquired from a monitored asset, selectively choosing values that meet at least one predefined criterion of good data quality and rejecting values that fail to meet it. The system recalibrates a previously trained or calibrated model having a learned scope of normal operation of the asset by using the accepted data values to adjust that learned scope, thereby defining a recalibrated model with the adjusted learned scope of normal operation of the asset.
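The two stages of the claim (quality filtering, then recalibration of the learned scope of normal operation) can be sketched generically; the quality predicate and the band-style "model" below are illustrative stand-ins, not the patented implementation:

```python
def filter_and_recalibrate(model, observations, is_good):
    """Schematic adaptive retraining: keep only observations passing the
    data-quality predicate, then widen the model's learned normal band.

    model: dict with 'lo'/'hi' bounds describing normal operation
    is_good: predicate implementing the quality criterion (e.g. value is
    finite and within sensor range) -- the criteria themselves are
    application-specific.
    """
    accepted = [x for x in observations if is_good(x)]
    if accepted:
        model = {"lo": min(model["lo"], min(accepted)),
                 "hi": max(model["hi"], max(accepted))}
    return model, accepted
```

Rejected values never reach the recalibration step, so bad sensor data cannot distort the learned scope of normal operation.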
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method that filters asset operating data values acquired from a monitored asset, selectively choosing values that meet at least one predefined criterion of good data quality and rejecting values that fail to meet it. The system recalibrates a previously trained or calibrated model having a learned scope of normal operation of the asset by using the accepted data values to adjust that learned scope, thereby defining a recalibrated model with the adjusted learned scope of normal operation of the asset.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
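Gillespie's direct-method SSA referenced above is short enough to sketch in full: draw an exponential waiting time from the total propensity, then pick the firing reaction with probability proportional to its propensity. A minimal sketch (the bimolecular example and rate constant are illustrative):

```python
import random

def ssa(x, reactions, t_end, rng):
    """Gillespie's direct-method stochastic simulation algorithm (SSA).

    x: dict mapping species name -> copy number (modified in place)
    reactions: list of (propensity_fn, state_change) pairs; propensity_fn(x)
    returns the reaction's current rate, and state_change maps species to
    copy-number increments applied when the reaction fires.
    """
    t = 0.0
    while True:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                      # no reaction can fire
            break
        tau = rng.expovariate(a0)          # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        # Pick the firing reaction proportionally to its propensity.
        r, acc = rng.random() * a0, 0.0
        for p, (_, change) in zip(props, reactions):
            acc += p
            if r < acc:
                for species, d in change.items():
                    x[species] += d
                break
    return x

# Illustrative example: irreversible A + B -> C with rate constant 0.1.
rng = random.Random(1)
state = ssa({"A": 50, "B": 50, "C": 0},
            [(lambda x: 0.1 * x["A"] * x["B"], {"A": -1, "B": -1, "C": 1})],
            t_end=100.0, rng=rng)
```

Because every reaction event is simulated individually, the cost grows with the total number of firings, which is the inefficiency that tau-leaping and the project's hybrid discrete/continuous methods address.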
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
Lim, H.; Hale, L. M.; Zimmerman, J. A.; Battaile, C. C.; Weinberger, C. R.
2015-01-05
In this study, we develop an atomistically informed crystal plasticity finite element (CP-FE) model for body-centered-cubic (BCC) α-Fe that incorporates non-Schmid stress-dependent slip with temperature and strain rate effects. Based on recent insights obtained from atomistic simulations, we propose a new constitutive model that combines a generalized non-Schmid yield law with aspects from a line tension (LT) model for describing the activation enthalpy required for the motion of dislocation kinks. Atomistic calculations are conducted to quantify the non-Schmid effects, while both experimental data and atomistic simulations are used to assess the temperature and strain rate effects. The parameterized constitutive equation is implemented into a BCC CP-FE model to simulate plastic deformation of single and polycrystalline Fe, which is compared with experimental data from the literature. This direct comparison demonstrates that the atomistically informed model accurately captures the effects of crystal orientation, temperature, and strain rate on the flow behavior of single-crystal Fe. Furthermore, our proposed CP-FE model exhibits temperature- and strain-rate-dependent flow and yield surfaces in polycrystalline Fe that deviate from conventional CP-FE models based on Schmid's law.
Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems in order to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel; fully coupled fuel simulation codes are also required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding of, and predictive capability for simulating, the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed at each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Adomian Decomposition Method for Quark Gluon Plasma Model
Constantinescu, Radu; Ionescu, Carmen; Stoicescu, Mihai
2011-10-03
The paper investigates the possibility of obtaining analytical solutions for the Quark Gluon Plasma model using the Adomian decomposition method.
Yang, Judith C.
2015-01-09
The purpose of this grant is to develop the multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, building on the PI's (Yang) extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy methods. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale Kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed from this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.
Liu, Dajiang [Ames Laboratory; Evans, James W. [Ames Laboratory
2013-12-01
A realistic molecular-level description of catalytic reactions on single-crystal metal surfaces can be provided by stochastic multisite lattice-gas (msLG) models. This approach has general applicability, although in this report, we will focus on the example of CO-oxidation on the unreconstructed fcc metal (100) or M(100) surfaces of common catalyst metals M = Pd, Rh, Pt and Ir (i.e., avoiding regimes where Pt and Ir reconstruct). These models can capture the thermodynamics and kinetics of adsorbed layers for the individual reactants species, such as CO/M(100) and O/M(100), as well as the interaction and reaction between different reactant species in mixed adlayers, such as (CO + O)/M(100). The msLG models allow population of any of hollow, bridge, and top sites. This enables a more flexible and realistic description of adsorption and adlayer ordering, as well as of reaction configurations and configuration-dependent barriers. Adspecies adsorption and interaction energies, as well as barriers for various processes, constitute key model input. The choice of these energies is guided by experimental observations, as well as by extensive Density Functional Theory analysis. Model behavior is assessed via Kinetic Monte Carlo (KMC) simulation. We also address the simulation challenges and theoretical ramifications associated with very rapid diffusion and local equilibration of reactant adspecies such as CO. These msLG models are applied to describe adsorption, ordering, and temperature programmed desorption (TPD) for individual CO/M(100) and O/M(100) reactant adlayers. In addition, they are also applied to predict mixed (CO + O)/M(100) adlayer structure on the nanoscale, the complete bifurcation diagram for reactive steady-states under continuous flow conditions, temperature programmed reaction (TPR) spectra, and titration reactions for the CO-oxidation reaction. Extensive and reasonably successful comparison of model predictions is made with experimental data. Furthermore
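The KMC machinery underlying the msLG models above can be illustrated with a minimal sketch. This is not the authors' multisite lattice-gas model: it is a toy one-dimensional lattice with invented adsorption and desorption rates, showing only the rejection-free event-selection and clock-advance steps that any lattice KMC simulation shares.

```python
import math
import random

# Toy rejection-free (BKL/Gillespie-style) KMC step for a 1-D lattice gas.
# Real msLG models track hollow/bridge/top sites, interactions, and reaction
# barriers; the event catalog and rate values here are placeholders.

def kmc_step(lattice, rates, rng):
    """Execute one KMC event in place; return the elapsed time increment."""
    # Enumerate all currently enabled events with their rates.
    events = []
    for site, occupied in enumerate(lattice):
        if occupied:
            events.append((site, 0, rates["des"]))  # desorption empties site
        else:
            events.append((site, 1, rates["ads"]))  # adsorption fills site
    total = sum(rate for _, _, rate in events)
    # Choose an event with probability proportional to its rate.
    x = rng.random() * total
    for site, new_state, rate in events:
        x -= rate
        if x <= 0.0:
            lattice[site] = new_state
            break
    # The waiting time is exponentially distributed with mean 1/total.
    return -math.log(1.0 - rng.random()) / total

rng = random.Random(0)
lattice = [0] * 8
t = 0.0
for _ in range(100):
    t += kmc_step(lattice, {"ads": 1.0, "des": 0.5}, rng)
```

With adsorption faster than desorption, the toy lattice equilibrates toward mostly filled sites; the same loop structure extends to arbitrary event catalogs.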
Vikas Tomer; John Renaud
2010-08-31
It is estimated that by using better and improved high temperature structural materials, the power generation efficiency of the power plants can be increased by 15% resulting in significant cost savings. One such promising material system for future high-temperature structural applications in power plants is Silicon Carbide-Silicon Nitride (SiC-Si{sub 3}N{sub 4}) nanoceramic matrix composites. The described research work focuses on multiscale simulation-based design of these SiC-Si{sub 3}N{sub 4} nanoceramic matrix composites. There were two primary objectives of the research: (1) Development of a multiscale simulation tool and corresponding multiscale analyses of the high-temperature creep and fracture resistance properties of the SiC-Si{sub 3}N{sub 4} nanocomposites at nano-, meso- and continuum length- and timescales; and (2) Development of a simulation-based robust design optimization methodology for application to the multiscale simulations to predict the range of the most suitable phase morphologies for the desired high-temperature properties of the SiC-Si{sub 3}N{sub 4} nanocomposites. The multiscale simulation tool is based on a combination of molecular dynamics (MD), cohesive finite element method (CFEM), and continuum level modeling for characterizing time-dependent material deformation behavior. The material simulation tool is incorporated in a variable fidelity model management based design optimization framework. Material modeling includes development of an experimental verification framework. Using material models based on multiscaling, it was found using molecular simulations that clustering of the SiC particles near Si{sub 3}N{sub 4} grain boundaries leads to significant nanocomposite strengthening and significant rise in fracture resistance. It was found that a control of grain boundary thicknesses by dispersing non-stoichiometric carbide or nitride phases can lead to reduction in strength however significant rise in fracture strength. The
Collaboratory for Multiscale Chemical Science (CMCS)
Allison, Thomas C
2012-07-03
This document provides details of the contributions made by NIST to the Collaboratory for Multiscale Chemical Science (CMCS) project. In particular, efforts related to the provision of data (and software in support of that data) relevant to the combustion pilot project are described.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
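One of the frugal techniques discussed, local sensitivity analysis, can be sketched in a few lines. The toy model function below is a hypothetical placeholder (a real application would wrap a groundwater-model run); the point is that the method needs only n+1 runs for n parameters, and the perturbed runs are mutually independent, hence parallelizable.

```python
# Sketch: one-at-a-time local sensitivity analysis by forward finite
# differences, assuming a scalar-output model. Placeholder model only.

def local_sensitivities(model, params, rel_step=1e-6):
    """Return (base_output, [d(output)/d(param_i)]) using n+1 model runs."""
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        step = rel_step * (abs(p) if p != 0.0 else 1.0)
        perturbed = list(params)
        perturbed[i] = p + step  # perturb one parameter at a time
        sens.append((model(perturbed) - base) / step)
    return base, sens

def toy_model(params):
    k, s = params  # hypothetical recharge and storage parameters
    return k * k + 3.0 * s

base, sens = local_sensitivities(toy_model, [2.0, 1.0])
# Analytic derivatives at (k, s) = (2, 1) are (2k, 3) = (4, 3).
```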
Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Day-Lewis, Frederick; Singha, Kamini; Haggerty, Roy; Johnson, Tim; Binley, Andrew; Lane, John
2014-01-16
The project followed a three-part research plan involving (1) development of computer codes and techniques to estimate mass-transfer parameters from time-lapse electrical data; (2) bench-scale experiments on synthetic materials and materials from cores from the Hanford 300 Area; and (3) field demonstration experiments at the DOE’s Hanford 300 Area. In a synergistic add-on to our workplan, we analyzed data from field experiments performed at the DOE Naturita Site under a separate DOE SBR grant, on which PI Day-Lewis served as co-PI. Techniques developed for application to Hanford datasets also were applied to data from Naturita. 1. Introduction The Department of Energy (DOE) faces enormous scientific and engineering challenges associated with the remediation of legacy contamination at former nuclear weapons production facilities. Selection, design, and optimization of appropriate site remedies (e.g., pump-and-treat, biostimulation, or monitored natural attenuation) require reliable predictive models of radionuclide fate and transport; however, our current modeling capabilities are limited by an incomplete understanding of multi-scale mass transfer—its rates, scales, and the heterogeneity of controlling parameters. At many DOE sites, long “tailing” behavior, concentration rebound, and slower-than-expected cleanup are observed; these observations are all consistent with multi-scale mass transfer [Haggerty and Gorelick, 1995; Haggerty et al., 2000; 2004], which renders pump-and-treat remediation and biotransformation inefficient and slow [Haggerty and Gorelick, 1994; Harvey et al., 1994; Wilson, 1997]. Despite the importance of mass transfer, there are significant uncertainties associated with controlling parameters, and the prevalence of mass transfer remains a point of debate [e.g., Hill et al., 2006; Molz et al., 2006] for lack of experimental methods to verify and measure it in situ or independently of tracer breakthrough. There is a critical need for new field-experimental techniques to
Weather Research and Forecasting Model with the Immersed Boundary Method
Energy Science and Technology Software Center (OSTI)
2012-05-01
The Weather Research and Forecasting (WRF) Model with the immersed boundary method is an extension of the open-source WRF Model available from www.wrf-model.org. The new code modifies the gridding procedure and boundary conditions in the WRF model to improve WRF's ability to simulate the atmosphere in environments with steep terrain and at high resolutions.
Multiscale modeling and characterization for performance and...
Office of Scientific and Technical Information (OSTI)
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical ...
A New Computational Paradigm in Multiscale Simulations: Application to
U.S. Department of Energy (DOE) all webpages (Extended Search)
Brain Blood Flow | Argonne Leadership Computing Facility. Authors: Grinberg, L., Insley, J.A., Morozov, V., Papka, M.E., Karniadakis, G.E., Fedosov, D., Kumaran, K. Interfacing atomistic-based with continuum-based simulation codes is now required in many multiscale physical and biological systems. We present the computational advances that have enabled the first multiscale simulation on 190,740 processors
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
A New Computational Paradigm in Multiscale Simulations: Application...
U.S. Department of Energy (DOE) all webpages (Extended Search)
A New Computational Paradigm in Multiscale Simulations: Application to Brain Blood Flow ... We study blood flow in a patient-specific cerebrovasculature with a brain aneurysm, and ...
Multi-Scale Initial Conditions For Cosmological Simulations ...
Office of Scientific and Technical Information (OSTI)
Journal Article: Multi-Scale Initial Conditions For Cosmological Simulations Citation ... OSTI Identifier: 1028674 Report Number(s): SLAC-PUB-14463 Journal ID: ISSN 0035-8711; ...
Theory & Modeling | Argonne National Laboratory
U.S. Department of Energy (DOE) all webpages (Extended Search)
interactions with nanostructures Methods and software development, including multiscale approaches to assembly Group Lead Stephen Gray People Maria K. Y. Chan Larry Curtiss...
An adaptive wavelet stochastic collocation method for irregular...
Office of Scientific and Technical Information (OSTI)
adaptive (MdMrA) sparse grid stochastic collocation method, that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. ...
Multiscale Computation. Needs and Opportunities for BER Science
Scheibe, Timothy D.; Smith, Jeremy C.
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.
Optical Measurement Methods used in Calibration and Validation of Modeled
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Injection Spray Characteristics | Department of Energy. Poster presented at the 16th Directions in Engine-Efficiency and Emissions Research (DEER) Conference in Detroit, MI, September 27-30, 2010.
Modeling Microbes: New Methods for Integrated Metabolic and Regulatory
U.S. Department of Energy (DOE) all webpages (Extended Search)
Network Reconstruction | Argonne National Laboratory. September 8, 2016, 10:30AM to 11:30PM. Presenter: Jose Faria. Location: Building 240, Room 4301. Type: Seminar. Series: MCS Seminar. Abstract: The reconstruction of genome-scale metabolic models (GEMs) from genome functional annotations is, nowadays, a routine practice in systems biology research. The models have been successfully used to predict organisms'
Self-Consistent Multiscale Theory of Internal Wave, Mean-Flow Interactions
Holm, D.D.; Aceves, A.; Allen, J.S.; Alber, M.; Camassa, R.; Cendra, H.; Chen, S.; Duan, J.; Fabijonas, B.; Foias, C.; Fringer, O.; Gent, P.R.; Jordan, R.; Kouranbaeva, S.; Kovacic, G.; Levermore, C.D.; Lythe, G.; Lifschitz, A.; Marsden, J.E.; Margolin, L.; Newberger, P.; Olson, E.; Ratiu, T.; Shkoller, S.; Timofeyev, I.; Titi, E.S.; Wynn, S.
1999-06-03
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The research reported here produced new effective ways to solve multiscale problems in nonlinear fluid dynamics, such as turbulent flow and global ocean circulation. This was accomplished by first developing new methods for averaging over random or rapidly varying phases in nonlinear systems at multiple scales. We then used these methods to derive new equations for analyzing the mean behavior of fluctuation processes coupled self consistently to nonlinear fluid dynamics. This project extends a technology base relevant to a variety of multiscale problems in fluid dynamics of interest to the Laboratory and applies this technology to those problems. The project's theoretical and mathematical developments also help advance our understanding of the scientific principles underlying the control of complex behavior in fluid dynamical systems with strong spatial and temporal internal variability.
Byrne, Jason P.; Morgan, Huw; Habbal, Shadia R.; Gallagher, Peter T.
2012-06-20
Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the Solar and Heliospheric Observatory (SOHO), Solar TErrestrial RElations Observatory (STEREO), and future coronagraph missions also makes manual cataloging of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloging packages such as CACTus, SEEDS, and ARTEMIS. Here, we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of current catalogs. It works by first employing the dynamic CME separation technique outlined in a companion paper, and then characterizing CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane of sky. The effectiveness of the method is demonstrated by its application to a selection of SOHO/LASCO and STEREO/SECCHI images, as well as to synthetic coronagraph images created from a model corona with a variety of CMEs. The algorithms described in this article are being applied to the whole LASCO and SECCHI data sets, and a catalog of results will soon be available to the public.
Probability of detection models for eddy current NDE methods
Rajesh, S.N.
1993-04-30
The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.
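The final step of a POD framework, turning a signal distribution into a detection probability against a decision threshold, can be sketched directly. The linear signal-versus-flaw-size calibration and the noise level below are invented illustrative numbers; in the thesis, the signal distribution comes from finite element analysis rather than a closed-form model.

```python
import math

# Sketch: POD curve from a Gaussian signal model. For a signal distributed
# N(mu(a), sigma^2) at flaw size a, POD(a) = P(signal > threshold).
# mu(a) = 2*a and sigma = 0.5 are hypothetical placeholders.

def pod(flaw_size, threshold=1.0, noise_sigma=0.5):
    mu = 2.0 * flaw_size  # hypothetical mean signal amplitude
    z = (threshold - mu) / (noise_sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)  # Gaussian upper-tail probability

curve = [pod(a / 10.0) for a in range(0, 21)]  # flaw sizes 0.0 .. 2.0
# The curve rises monotonically from near 0 toward 1 as flaw size grows.
```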
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point model technique is used for numerically solving the system of discretized equations, to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
LEAL, L. GARY
2013-06-30
One of the most challenging multi-scale simulation problems in the area of multi-phase materials is to develop effective computational techniques for the prediction of coalescence and related phenomena involving rupture of a thin liquid film due to the onset of instability driven by van der Waals or other micro-scale attractive forces. Accurate modeling of this process is critical to prediction of the outcome of milling processes for immiscible polymer blends, one of the most important routes to new advanced polymeric materials. In typical situations, the blend evolves into an "emulsion" of dispersed-phase drops in a continuous matrix fluid. Coalescence is then a critical factor in determining the size distribution of the dispersed phase, but is extremely difficult to predict from first principles. The thin film separating two drops may only achieve rupture at dimensions of approximately 10 nm while the drop sizes are O(10 μm). It is essential to achieve very accurate solutions for the flow and for the interface shape at both the macroscale of the full drops and within the thin film (where the destabilizing disjoining pressure due to van der Waals forces is proportional approximately to the inverse third power of the local film thickness, h^(-3)). Furthermore, the fluids of interest are polymeric (though Newtonian) and the classical continuum description begins to fail as the film thins, requiring incorporation of molecular effects, such as a hybrid code that incorporates a version of coarse-grain molecular dynamics within the thin film coupled with a classical continuum description elsewhere in the flow domain. Finally, the presence of surface-active additives, either surfactants (in the form of di-block copolymers) or surface-functionalized micro- or nano-scale particles, adds an additional level of complexity, requiring development of a distinct numerical method to predict the nonuniform concentration gradients of these additives that are responsible for
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; Knoll, Dana A.
2015-06-01
We examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We demonstrate how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R^2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
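The two goodness-of-fit statistics named above, RMSE and R^2, are simple to compute once a fit is in hand. The observed and fitted values below are made-up placeholders, not the paper's UTP measurements; only the formulas are the point.

```python
import math

# Sketch: the standard RMSE and R^2 goodness-of-fit statistics.
# obs/fit are hypothetical global solar radiation values (W/m^2).

def rmse(observed, predicted):
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot  # fraction of variance explained

obs = [310.0, 480.0, 650.0, 720.0, 590.0]  # made-up readings
fit = [300.0, 500.0, 640.0, 710.0, 600.0]  # made-up fitted values
error = rmse(obs, fit)
quality = r_squared(obs, fit)
```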
A meshless method for modeling convective heat transfer
Carrington, David B
2010-01-01
A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
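The radial basis function machinery at the core of the meshless method can be sketched briefly. The paper solves the primitive flow equations (and notes a MATLAB implementation); the toy below only shows the basic building block, fitting RBF weights at scattered nodes by solving the interpolation system, with an invented 1-D node set and Gaussian shape parameter.

```python
import math

# Sketch: 1-D Gaussian RBF interpolation with a hand-rolled dense solver.
# Nodes, data, and the shape parameter eps are illustrative choices.

def rbf(r, eps=2.0):
    return math.exp(-(eps * r) ** 2)  # Gaussian radial basis function

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

nodes = [0.0, 0.3, 0.6, 1.0]
values = [math.sin(x) for x in nodes]  # data to interpolate
A = [[rbf(abs(xi - xj)) for xj in nodes] for xi in nodes]
weights = solve(A, values)

def interpolant(x):
    return sum(w * rbf(abs(x - xi)) for w, xi in zip(weights, nodes))
```

In the full meshless method, derivatives of the RBFs (not just their values) are assembled the same way to discretize the flow and heat-transfer operators without a mesh.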
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; Vogelmann, Andrew M.
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
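One common way to quantify a correlation length scale like those reported above is the e-folding distance, the separation at which the correlation drops below 1/e. The Gaussian correlation model and the 75 km nominal length below are illustrative assumptions, not the paper's estimated covariances.

```python
import math

# Sketch: estimating a correlation length scale as the e-folding distance
# of a sampled isotropic correlation function. Gaussian model assumed.

def efolding_length(distances, correlations):
    """Return the first distance at which correlation falls below 1/e."""
    for d, c in zip(distances, correlations):
        if c < 1.0 / math.e:
            return d
    return None  # correlation never decays below 1/e over the sampled range

length_scale = 75.0  # km, hypothetical nominal value
ds = [10.0 * k for k in range(31)]  # separations 0..300 km in 10 km steps
cs = [math.exp(-(d / length_scale) ** 2) for d in ds]
# For the Gaussian model exp(-(d/L)^2), the true 1/e distance equals L,
# so the sampled estimate lands on the first grid point above 75 km.
```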
Multilevel method for modeling large-scale networks.
Safro, I. M.
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of future network topology are central issues of this field. In this project, we address the questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
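Of the generators named above, R-MAT is simple enough to sketch: each edge is placed by recursively descending into one of four quadrants of the adjacency matrix with fixed probabilities. The quadrant probabilities below are commonly cited illustrative defaults, not values tied to this project.

```python
import random

# Sketch: R-MAT-style edge placement in a 2^scale x 2^scale adjacency matrix.
# probs = (a, b, c, d) are the quadrant probabilities; (0.57, 0.19, 0.19, 0.05)
# is a frequently used illustrative setting.

def rmat_edge(scale, probs, rng):
    """Pick one (row, col) edge endpoint pair by recursive quadrant descent."""
    a, b, c, _ = probs
    row = col = 0
    for level in range(scale):
        half = 1 << (scale - level - 1)  # size of the current quadrant
        r = rng.random()
        if r < a:
            pass                         # top-left quadrant
        elif r < a + b:
            col += half                  # top-right
        elif r < a + b + c:
            row += half                  # bottom-left
        else:
            row += half
            col += half                  # bottom-right
    return row, col

rng = random.Random(1)
edges = [rmat_edge(4, (0.57, 0.19, 0.19, 0.05), rng) for _ in range(50)]
# All endpoints fall inside the 2^4 = 16-node vertex range.
```

The skew toward the top-left quadrant is what produces the heavy-tailed degree distributions mentioned in the abstract; setting all four probabilities to 0.25 recovers a uniform random graph.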
Li, Zhijin; Cheng, Xiaoping; Gustafson, William I.; Vogelmann, Andrew M.
2016-05-17
Spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacings down to the order of 1 km are now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are examined numerically and theoretically. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into such fine-resolution models. Within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
(Environmental and geophysical modeling, fracture mechanics, and boundary element methods)
Gray, L.J.
1990-11-09
Technical discussions at the various sites visited centered on application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
Arctic sea ice modeling with the material-point method.
Peterson, Kara J.; Bochev, Pavel Blagoveston
2010-04-01
Arctic sea ice plays an important role in global climate by reflecting solar radiation and insulating the ocean from the atmosphere. Due to feedback effects, the Arctic sea ice cover is changing rapidly. To accurately model this change, high-resolution calculations must incorporate: (1) the annual cycle of growth and melt due to radiative forcing; (2) mechanical deformation due to surface winds, ocean currents and Coriolis forces; and (3) localized effects of leads and ridges. We have demonstrated a new mathematical algorithm for solving the sea ice governing equations using the material-point method (MPM) with an elastic-decohesive constitutive model. An initial comparison with the LANL CICE code indicates that the ice edge is sharper using MPM, but that many of the overall features are similar.
A New Method of Comparing Forcing Agents in Climate Models
Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.; Jarvis, Andrew
2015-10-14
We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
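The explicit feedback loop described above can be illustrated with a toy zero-dimensional energy-balance model. Everything in this sketch (the feedback parameter, heat capacity, controller gain, and the 3.7 W m^-2 ramp standing in for a CO2-like forcing) is an illustrative assumption, not taken from the paper; it only shows how an external integral controller can adjust one forcing agent to hold global mean temperature fixed while another is varied.

```python
# Toy zero-dimensional energy-balance model, C dT/dt = F_co2 + F_solar - lam*T,
# with an external integral controller adjusting F_solar to keep T near zero
# while an illustrative CO2-like forcing ramps up to 3.7 (all constants are
# assumptions for this sketch).
lam, C, dt = 1.2, 8.0, 0.05   # feedback parameter, heat capacity, time step
k_i = 2.0                     # integral gain of the external feedback loop
T, F_solar = 0.0, 0.0
for step in range(4000):
    t = step * dt
    F_co2 = 3.7 * min(t / 100.0, 1.0)            # slow ramp, then hold
    T += dt * (F_co2 + F_solar - lam * T) / C    # energy-balance update
    F_solar += -k_i * T * dt                     # push temperature back to zero
# With T held near zero, -F_solar directly estimates the strength of the
# CO2-like forcing, with no explicit radiative-forcing calculation needed.
```

Because the controller keeps the temperature anomaly small throughout, state-dependent feedbacks are barely excited, which is the point of the method described in the abstract.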
Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis
Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen
2014-12-18
Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment's dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment's working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.
Progress in Fast, Accurate Multi-scale Climate Simulations
Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter
2015-01-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
An Efficient Implementation of Multiscale Simulation Software PNP-cDFT
Meng, Da; Lin, Guang; Sushko, Maria L.
2012-07-23
An efficient implementation of PNP-cDFT, a multiscale method for computing the chemical potentials of charged species, is designed and evaluated. Spatial decomposition of the multi-particle system is employed in the parallelization of the classical density functional theory (cDFT) algorithm. Furthermore, a truncation strategy is used to reduce the computational complexity of the cDFT algorithm. The simulation results show that the parallel implementation has close to linear scalability in parallel computing environments for both 1D and 3D systems. They also show that the truncated versions of cDFT improve the efficiency of the method substantially.
Thermodynamic Development of Corrosion Rate Modeling in Iron...
Office of Scientific and Technical Information (OSTI)
Bridging the PSI Knowledge Gap: A Multi-Scale Approach
Wirth, Brian D
2015-01-08
Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that four of the top five fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that both exploits access to state-of-the-art PSI experiments and modeling, as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de
Next Generation Multi-Scale Quantum Simulation Software for Strongly Correlated Materials
Jarrell, Mark
2014-11-18
The goal of this project was to develop a new formalism for the correlated electron problem, which we call the Multi-Scale Many-Body formalism. This report focuses on the work done at Louisiana State University (LSU) since the mid-term report. The LSU group moved from the University of Cincinnati (UC) to LSU in the summer of 2008. In the last full year at UC, only half of the funds were received, and it took nearly two years for the funds to be transferred from UC to LSU. This effectively shut down the research at LSU until the transfer was completed in 2011; there were also two no-cost extensions of the grant until August of this year. The grant ended for the other SciDAC partners at Davis and ORNL in 2011. Since the mid-term report, the LSU group has published 19 papers [P1-P19] acknowledging this SciDAC, which are listed below. In addition, numerous invited talks acknowledged the SciDAC. Below, we summarize the work at LSU since the mid-term report, mainly since funding resumed. The projects include (1) the further development of multi-scale methods for correlated systems, (2) the study of quantum criticality at finite doping in the Hubbard model, (3) a promising new method to study Anderson localization with a million-fold reduction of computational complexity, (4) other projects, and (5) a workshop to close out the project that brought together exascale program developers (Stellar, MPI, OpenMP, ...) with applications developers.
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd{sub 2}O{sub 2}S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd{sub 2}O{sub 2}S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd{sub 2}O{sub 2}S:Tb screens, under similar conditions (x-ray incident energy, screen thickness)
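The kind of Monte Carlo photon transport described above can be illustrated with a deliberately simplified 1D sketch. The coefficients, slab geometry, and the binary 1D "scatter" are invented for illustration (the actual model uses Mie scattering theory and full 3D transport): free path lengths are sampled from the Beer-Lambert law, and each interaction is classified as absorption or scattering from the interaction coefficients.

```python
import numpy as np

# 1D toy photon transport through a phosphor-like slab: sample exponential
# free paths (Beer-Lambert), then choose absorption vs. scattering.
# Coefficients and geometry are illustrative assumptions.
rng = np.random.default_rng(2)
mu_s, mu_a = 8.0, 2.0              # scattering / absorption coefficients (1/mm)
mu_t = mu_s + mu_a                 # total interaction coefficient
n_photons, thickness = 20000, 0.2  # photon count, slab thickness (mm)
transmitted = 0
for _ in range(n_photons):
    x, direction = 0.0, 1.0
    while True:
        x += direction * rng.exponential(1.0 / mu_t)  # free path ~ Exp(mu_t)
        if x >= thickness:                   # escaped through the back face
            transmitted += 1
            break
        if x <= 0.0:                         # escaped back out of the front face
            break
        if rng.random() < mu_a / mu_t:       # this interaction is an absorption
            break
        direction = rng.choice([-1.0, 1.0])  # isotropic "scatter" in 1D
frac = transmitted / n_photons
```

Replacing the 1D direction flip with a Mie phase function and tracking emission positions is what turns a sketch like this into a model of screen resolution and light yield.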
A robust absorbing layer method for anisotropic seismic wave modeling
Métivier, L.; Brossier, R.; Labbé, S.; Operto, S.; Virieux, J.
2014-12-15
When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities: incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial, as in many seismic imaging applications accurately accounting for subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to the PML approach. This method is based on the decomposition of the wavefield into components propagating into and out of the domain of interest; only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched and are therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which the SMART method is based can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped.
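The selective damping idea, damping only the outgoing part of the wavefield, can be sketched in 1D, where the inward/outward decomposition is exact via characteristic variables. This is an illustration of the principle under simplified assumptions (1D acoustics, constant impedance, first-order upwind advection), not the authors' anisotropic elastic implementation.

```python
import numpy as np

# 1D acoustic wavefield split into characteristics: w+ = p + Z*v travels
# right, w- = p - Z*v travels left. A damping profile sigma acts ONLY on
# the outgoing (right-going) component inside a layer at the right edge.
n, c, Z, dt, dx = 400, 1.0, 1.0, 0.4, 1.0
p = np.exp(-0.01 * (np.arange(n) - 200.0) ** 2)   # initial pressure pulse
v = np.zeros(n)
sigma = np.zeros(n)
sigma[-60:] = np.linspace(0.0, 0.5, 60) ** 2      # smooth ramp inside the layer
for _ in range(800):
    wp, wm = p + Z * v, p - Z * v
    wp[1:] -= c * dt / dx * (wp[1:] - wp[:-1])    # upwind advection to the right
    wm[:-1] -= c * dt / dx * (wm[:-1] - wm[1:])   # upwind advection to the left
    wp *= 1.0 - dt * sigma                        # damp only the outgoing part
    p, v = 0.5 * (wp + wm), 0.5 * (wp - wm) / Z
# By the end the right-going pulse has been absorbed in the layer and the
# left-going pulse has exited through the open left boundary.
```

Because the damping never touches the incoming characteristic, no energy is injected into the domain, which is the 1D analogue of the unconditional dissipativity claimed for the SMART layer.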
Neural node network and model, and method of teaching same
Parlos, Alexander G.; Atiya, Amir F.; Fernandez, Benito; Tsai, Wei K.; Chong, Kil T.
1995-01-01
The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2), more preferably, a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on- and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
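A minimal sketch of the hidden-layer update described above (sizes, weights, and the input sequence are arbitrary illustrations, not the patented implementation): each node combines the previous layer's current output with the one-step-delayed outputs of its own layer, where the diagonal of the recurrent weight matrix plays the role of the local feedback and the off-diagonal entries the role of the crosstalk.

```python
import numpy as np

# One hidden layer with unit-delay recurrence: the diagonal of W_rec is the
# local feedback, off-diagonal entries are the crosstalk between nodes of
# the same layer (all sizes and weights are illustrative).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # feed-forward weights
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))  # local feedback + crosstalk
h_prev = np.zeros(n_hid)                            # delayed layer output at t-1
for t in range(10):
    x = np.sin(0.3 * t + np.arange(n_in))           # arbitrary input sequence
    h = np.tanh(W_in @ x + W_rec @ h_prev)          # transfer function operation
    h_prev = h                                      # unit delay for the next step
```

The unit delay is what gives an otherwise feed-forward layer its memory of past states, which is why the patent targets dynamic nonlinear and time-series systems.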
Neural node network and model, and method of teaching same
Parlos, A.G.; Atiya, A.F.; Fernandez, B.; Tsai, W.K.; Chong, K.T.
1995-12-26
The present invention is a fully connected feed forward network that includes at least one hidden layer. The hidden layer includes nodes in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device occurring in the feedback path (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit from all the other nodes within the same layer. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2), more preferably, a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on- and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing. 21 figs.
Multi-Scale Initial Conditions For Cosmological Simulations
Hahn, Oliver (KIPAC, Menlo Park); Abel, Tom (ZAH, Heidelberg; HITS, Heidelberg)
2011-11-04
We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10{sup -4} for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.
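The first-order (1LPT, Zel'dovich) displacement step referred to above can be illustrated in a 1D periodic toy version. This sketch uses a plain FFT Poisson solve on white noise, not the paper's adaptive multi-grid solver or a physical transfer function, so every numerical choice here (grid size, seed, unit box) is an assumption for illustration: solve grad^2 phi = delta in Fourier space and displace particles by psi = -dphi/dx.

```python
import numpy as np

# 1D periodic Zel'dovich (1LPT) toy: white-noise density contrast delta,
# potential from grad^2 phi = delta solved in Fourier space, displacement
# psi = -dphi/dx. An odd grid size avoids the unpaired Nyquist mode.
n, L = 255, 1.0
rng = np.random.default_rng(1)
delta = rng.normal(size=n)
delta -= delta.mean()                       # zero-mean density contrast
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
delta_k = np.fft.fft(delta)
phi_k = np.zeros_like(delta_k)
phi_k[1:] = -delta_k[1:] / k[1:] ** 2       # -k^2 phi_k = delta_k; drop k = 0
psi = np.fft.ifft(-1j * k * phi_k).real     # psi_k = -i k phi_k
q = (np.arange(n) + 0.5) * L / n            # Lagrangian grid positions
x = q + psi                                 # displaced (Eulerian) positions
# To first order, d(psi)/dx = -delta: particles flow out of overdense regions.
```

The refinement machinery in the paper exists precisely because a global FFT like this cannot concentrate resolution in a zoom-in region without Fourier-space interference.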
Assessment of Molecular Modeling & Simulation
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
Symmetry Methods for a Geophysical Mass Flow Model
Torrisi, Mariano; Tracinà, Rita
2011-09-14
In the framework of symmetry analysis, we consider the class of 2 x 2 PDE systems to which the Savage-Hutter and Iverson models belong. New classes of exact solutions are found.
A Perspective on Coupled Multiscale Simulation and Validation in Nuclear Materials
M. P. Short; D. Gaston; C. R. Stanek; S. Yip
2014-01-01
The field of nuclear materials encompasses numerous opportunities to address and ultimately solve longstanding industrial problems by improving the fundamental understanding of materials through the integration of experiments with multiscale modeling and high-performance simulation. A particularly noteworthy example is an ongoing study of axial power distortions in a nuclear reactor induced by corrosion deposits, known as CRUD (Chalk River unidentified deposits). We describe how progress is being made toward achieving scientific advances and technological solutions on two fronts. Specifically, the study of thermal conductivity of CRUD phases has augmented missing data as well as revealed new mechanisms. Additionally, the development of a multiscale simulation framework shows potential for the validation of a new capability to predict the power distribution of a reactor, in effect direct evidence of technological impact. The material- and system-level challenges identified in the study of CRUD are similar to other well-known vexing problems in nuclear materials, such as irradiation accelerated corrosion, stress corrosion cracking, and void swelling; they all involve connecting materials science fundamentals at the atomistic- and mesoscales to technology challenges at the macroscale.
Trabanino, Rene J; Vaidehi, Nagarajan; Hall, Spencer E; Goddard, William A; Floriano, Wely
2013-02-05
The invention provides computer-implemented methods and apparatus implementing a hierarchical protocol using multiscale molecular dynamics and molecular modeling methods to predict the presence of transmembrane regions in proteins, such as G-Protein Coupled Receptors (GPCR), and protein structural models generated according to the protocol. The protocol features a coarse grain sampling method, such as hydrophobicity analysis, to provide a fast and accurate procedure for predicting transmembrane regions. Methods and apparatus of the invention are useful to screen protein or polynucleotide databases for encoded proteins with transmembrane regions, such as GPCRs.
Piri, Mohammad
2014-03-31
Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO{sub 2} in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO{sub 2}-brine systems; (ii) improved understanding of permanent trapping mechanisms; (iii) scientifically correct, fine grid numerical simulations of CO{sub 2} storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO{sub 2} saturations, and relative permeability scanning curves (hysteresis) in rock samples from RSU; (2) characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) development of a physically based dynamic core-scale pore network model; (4) development of new, improved high-performance modules for the UW-team simulator to provide new capabilities to the existing model, including hysteresis in the relative permeability functions, geomechanical deformation, and an equilibrium calculation (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) an analysis of long term permanent trapping of mixed scCO{sub 2} through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.
Multiscale Toxicology: Building the Next-Generation Tools for Toxicology
Retterer, S. T.; Holsapple, M. P.
2013-10-31
A Cooperative Research and Development Agreement (CRADA) was established between Battelle Memorial Institute (BMI), Pacific Northwest National Laboratory (PNNL), Oak Ridge National Laboratory (ORNL), Brookhaven National Laboratory (BNL), and Lawrence Livermore National Laboratory (LLNL) with the goal of combining the analytical and synthetic strengths of the National Laboratories with BMI's expertise in basic and translational medical research to develop a collaborative pipeline and suite of high throughput and imaging technologies that could be used to provide a more comprehensive understanding of material and drug toxicology in humans. The Multi-Scale Toxicity Initiative (MSTI), consisting of the team members above, was established to coordinate cellular scale, high-throughput in vitro testing, computational modeling, and whole animal in vivo toxicology studies between MSTI team members. Development of a common, well-characterized set of materials for testing was identified as a crucial need for the initiative. Two research tracks were established by BMI during the course of the CRADA. The first research track focused on the development of tools and techniques for understanding the toxicity of nanomaterials, specifically inorganic nanoparticles (NPs). ORNL's work focused primarily on the synthesis, functionalization, and characterization of a common set of NPs for dissemination to the participating laboratories. These particles were synthesized to retain the same surface characteristics and size, but to allow visualization using the variety of imaging technologies present across the team. Characterization included the quantitative analysis of physical and chemical properties of the materials as well as the preliminary assessment of NP toxicity using commercially available toxicity screens and emerging optical imaging strategies. Additional efforts examined the development of high-throughput microfluidic and imaging assays for measuring NP uptake, localization, and
Multi-scale framework for the accelerated design of high-efficiency...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Multi-scale framework for the accelerated design of high-efficiency organic photovoltaic cells Organic and hybrid organic-inorganic solar cells (OSC) offer a promising low-cost...
Putting 'place' in a multiscale context: perspectives from the sustainability sciences
Wilbanks, Thomas J.
2015-05-04
This paper summarizes a number of perspectives that have emerged from the sustainability sciences in recent decades (NRC, 1999; Kates et al., 2001; NRC, 2006; Kates, 2010) that shed light on the role of place in multi-scale sustainability science and vice versa, ranging from the importance of the co-production of knowledge for sustainable development to threats to a 'sense of place' from global environmental and economic changes.
Computational Design of Novel Multiscale Concrete Rheometers | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
Leadership Computing Facility. Caption: This simulation image shows suspended particles in a rheometer for NIST's proposed mortar SRM. The spheres, which are color coded by their starting location in the rheometer, are suspended in a cement paste with properties derived from NIST's cement paste SRM. (Image: Nicos Martys and Steven G. Satterfield, National Institute of Standards and Technology.) Computational Design of Novel Multiscale Concrete Rheometers. PI Name: William
Multi-Scale Characterization of Improved Algae Strains
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
DOE Bioenergy Technologies Office (BETO) 2015 Project Peer Review. Multi-Scale Characterization of Improved Algae Strains, March 23, 2015, Algae Technology Area Review. Dr. Taraka Dale, Los Alamos National Laboratory. This presentation does not contain any proprietary, confidential, or otherwise restricted information. LA-UR-15-21927. Operated by Los Alamos National Security, LLC for the U.S. Department of Energy's NNSA. Goal Statement: The overall goal of this project is to develop a
Lifetime statistics of quantum chaos studied by a multiscale analysis
Di Falco, A.; Krauss, T. F. [School of Physics and Astronomy, University of St. Andrews, North Haugh, St. Andrews, KY16 9SS (United Kingdom); Fratalocchi, A. [PRIMALIGHT, Faculty of Electrical Engineering, Applied Mathematics and Computational Science, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900 (Saudi Arabia)
2012-04-30
In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.
Method of modeling transmissions for real-time simulation
Hebbale, Kumaraswamy V.
2012-09-25
A transmission modeling system includes an in-gear module that determines an in-gear acceleration when a vehicle is in gear. A shift module determines a shift acceleration based on a clutch torque when the vehicle is shifting between gears. A shaft acceleration determination module determines a shaft acceleration based on at least one of the in-gear acceleration and the shift acceleration.
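The division of labor described in this abstract can be sketched as follows. The module names mirror the abstract, but the torque and inertia formulas are purely illustrative assumptions, not the patented method:

```python
# Illustrative sketch only: the module structure follows the abstract, but the
# torque/inertia relations below are assumed for demonstration.

def in_gear_acceleration(engine_torque, gear_ratio, inertia):
    """Acceleration while in gear: engine torque through a fixed ratio."""
    return engine_torque * gear_ratio / inertia

def shift_acceleration(clutch_torque, inertia):
    """Acceleration during a shift, driven by the slipping clutch torque."""
    return clutch_torque / inertia

def shaft_acceleration(shifting, engine_torque, clutch_torque, gear_ratio, inertia):
    """Select the acceleration source based on whether the vehicle is shifting."""
    if shifting:
        return shift_acceleration(clutch_torque, inertia)
    return in_gear_acceleration(engine_torque, gear_ratio, inertia)
```

The key design point is that the shaft-acceleration module does not recompute dynamics itself; it only selects between the in-gear and shift submodels, which keeps the model cheap enough for real-time simulation.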
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least a variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
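A minimal sketch of the branch-pruning idea: a dictionary representation for programmable nodes and a scalar tracked variable are both assumptions here; the patent does not specify these details.

```python
# Hedged sketch of the solver described above: each node "executes its
# programming" (an update function, assumed), the tracked variable is checked
# against a tolerance window, and out-of-tolerance branches are abandoned.

def solve(node, value, lo, hi, path=(), results=None):
    """Depth-first traversal returning the paths that end in valid solutions."""
    if results is None:
        results = []
    value = node["update"](value)        # execute the node's programming
    path = path + (node["name"],)
    if not (lo <= value <= hi):          # out of tolerance: disable the branch
        return results
    children = node.get("children", [])
    if not children:                     # leaf reached in tolerance: valid
        results.append(path)
        return results
    for child in children:
        solve(child, value, lo, hi, path, results)
    return results
```

A branch whose tracked variable leaves the tolerance window is abandoned immediately, so its remaining nodes are never executed; only paths that reach a leaf in tolerance are aggregated as valid tree solutions.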
Review of Wind Energy Forecasting Methods for Modeling Ramping Events
Wharton, S; Lundquist, J K; Marjanovic, N; Williams, J L; Rhodes, M; Chow, T K; Maxwell, R
2011-03-28
Tall onshore wind turbines, with hub heights between 80 m and 100 m, can extract large amounts of energy from the atmosphere since they generally encounter higher wind speeds, but they face challenges given the complexity of boundary layer flows. This complexity of the lowest layers of the atmosphere, where wind turbines reside, has made conventional modeling efforts less than ideal. To meet the nation's goal of increasing wind power into the U.S. electrical grid, the accuracy of wind power forecasts must be improved. In this report, the Lawrence Livermore National Laboratory, in collaboration with the University of Colorado at Boulder, University of California at Berkeley, and Colorado School of Mines, evaluates innovative approaches to forecasting sudden changes in wind speed or 'ramping events' at an onshore, multimegawatt wind farm. The forecast simulations are compared to observations of wind speed and direction from tall meteorological towers and a remote-sensing Sound Detection and Ranging (SODAR) instrument. Ramping events, i.e., sudden increases or decreases in wind speed and hence, power generated by a turbine, are especially problematic for wind farm operators. Sudden changes in wind speed or direction can lead to large power generation differences across a wind farm and are very difficult to predict with current forecasting tools. Here, we quantify the ability of three models, mesoscale WRF, WRF-LES, and PF.WRF, which vary in sophistication and required user expertise, to predict three ramping events at a North American wind farm.
Search Method for Real-time Knowledge Discovery Modeled on the...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Search Method for Real-time Knowledge Discovery Modeled on the Human Brain Oak Ridge ... information processing properties of the human brain to computational knowledge discovery. ...
MREG V1.1 : a multi-scale image registration algorithm for SAR applications.
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.
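The coarse-to-fine paradigm (though not the MREG algorithm itself, which is not reproduced here) can be illustrated with a toy integer-pixel correlation registration; the pyramid depth, search radius, and correlation score are all assumptions:

```python
# Toy coarse-to-fine translation registration: estimate the offset on
# subsampled images first (handling large offsets cheaply), then refine at
# full resolution. Integer-pixel correlation only, for illustration.
import numpy as np

def best_shift(ref, mov, search):
    """Integer (dy, dx) within +/-search that best aligns mov onto ref."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = float((ref * shifted).sum())
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def multiscale_register(ref, mov, levels=2, search=2):
    """Accumulate the shift from the coarsest level down to full resolution."""
    dy, dx = 0, 0
    for level in reversed(range(levels)):
        f = 2 ** level
        r, m = ref[::f, ::f], mov[::f, ::f]
        m = np.roll(np.roll(m, dy // f, axis=0), dx // f, axis=1)
        sdy, sdx = best_shift(r, m, search)
        dy, dx = dy + sdy * f, dx + sdx * f
    return dy, dx
```

Each level only searches a small window, so a large initial offset is recovered at the coarse level while the fine level supplies the residual accuracy, which is the essence of the iterative multi-scale paradigm described above.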
MULTI-SCALE MORPHOLOGICAL ANALYSIS OF SDSS DR5 SURVEY USING THE METRIC SPACE TECHNIQUE
Wu Yongfeng; Batuski, David J.; Khalil, Andre
2009-12-20
Following the novel development and adaptation of the Metric Space Technique (MST), a multi-scale morphological analysis of the Sloan Digital Sky Survey (SDSS) Data Release 5 (DR5) was performed. The technique was adapted to perform a space-scale morphological analysis by filtering the galaxy point distributions with a smoothing Gaussian function, thus giving quantitative structural information on all size scales between 5 and 250 Mpc. The analysis was performed on a dozen slices of a volume of space containing many newly measured galaxies from the SDSS DR5 survey. Using the MST, observational data were compared to galaxy samples taken from N-body simulations with current best estimates of cosmological parameters and from random catalogs. By using the maximal ranking method among MST output functions, we also develop a way to quantify the overall similarity of the observed samples with the simulated samples.
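The space-scale filtering step can be sketched as binning the galaxy points onto a grid and smoothing with Gaussians of increasing width; the grid size and the list of scales below are illustrative assumptions, not the paper's actual parameters:

```python
# Sketch of the space-scale analysis: a 2-D point distribution is binned onto
# a grid and smoothed with Gaussians of increasing width, yielding one density
# field per length scale. All sizes here are illustrative.
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth2d(field, sigma):
    """Separable Gaussian smoothing (zero-padded boundaries)."""
    k = gaussian_kernel(sigma)
    field = np.apply_along_axis(np.convolve, 0, field, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, field, k, mode="same")

def multiscale_density(points, bins=64, sigmas=(1, 2, 4, 8)):
    """One smoothed density field per smoothing scale (in grid cells)."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return {s: smooth2d(hist, s) for s in sigmas}
```

Morphological statistics (in the paper, the MST output functions) would then be computed on each smoothed field, giving structural information as a function of scale.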
Yortsos, Y.C.
2001-05-29
This report is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.
Yortsos, Yanis C.
2001-08-07
This project is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.
Robertson, Eric P; Christiansen, Richard L.
2007-05-29
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
Robertson, Eric P; Christiansen, Richard L.
2007-10-23
A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.
An improved multiscale model for dilute turbulent gas particle...
Office of Scientific and Technical Information (OSTI)
as the scale-up in the design of circulating fluidized combustors and coal gasification. ...
Multiscale model of heat dissipation mechanisms during field...
Office of Scientific and Technical Information (OSTI)
OSTI Identifier: 22492702. Journal Article, Applied Physics Letters 108(3).
Utilizing CLASIC observations and multiscale models to study...
Office of Scientific and Technical Information (OSTI)
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Radiation Damage in Nuclear Fuel for Advanced Burner Reactors: Modeling and Experimental Validation
Jensen, Niels Gronbech; Asta, Mark; Ozolins, Vidvuds; Browning, Nigel; van de Walle, Axel; Wolverton, Christopher
2011-12-29
The consortium has completed its existence and we are here highlighting work and accomplishments. As outlined in the proposal, the objective of the work was to advance the theoretical understanding of advanced nuclear fuel materials (oxides) toward a comprehensive modeling strategy that incorporates the different relevant scales involved in radiation damage in oxide fuels. Approaching this we set out to investigate and develop a set of directions: 1) Fission fragment and ion trajectory studies through advanced molecular dynamics methods that allow for statistical multi-scale simulations. This work also includes an investigation of appropriate interatomic force fields useful for the energetic multi-scale phenomena of high energy collisions; 2) Studies of defect and gas bubble formation through electronic structure and Monte Carlo simulations; and 3) an experimental component for the characterization of materials such that comparisons can be obtained between theory and experiment.
A multilingual programming model for coupled systems.
Ong, E. T.; Larson, J. W.; Norris, B.; Tobis, M.; Steder, M.; Jacob, R. L.; Mathematics and Computer Science; Univ. of Wisconsin; Univ. of Chicago; The Australian National Univ.
2008-01-01
Multiphysics and multiscale simulation systems share a common software requirement: infrastructure to implement data exchanges between their constituent parts, often called the coupling problem. On distributed-memory parallel platforms, the coupling problem is complicated by the need to describe, transfer, and transform distributed data, known as the parallel coupling problem. Parallel coupling is emerging as a new grand challenge in computational science as scientists attempt to build multiscale and multiphysics systems on parallel platforms. An additional coupling problem in these systems is language interoperability between their constituent codes. We have created a multilingual parallel coupling programming model based on a successful open-source parallel coupling library, the Model Coupling Toolkit (MCT). This programming model's capabilities reach beyond MCT's native Fortran implementation to include bindings for the C++ and Python programming languages. We describe the method used to generate the interlanguage bindings. This approach enables an object-based programming model for implementing parallel couplings in non-Fortran coupled systems and in systems with language heterogeneity. We describe the C++ and Python versions of the MCT programming model and provide short examples. We report preliminary performance results for the MCT interpolation benchmark. We describe a major Python application that uses the MCT Python bindings, a Python implementation of the control and coupling infrastructure for the Community Climate System Model. We conclude with a discussion of the significance of this work to productivity computing in multidisciplinary computational science.
Three-Dimensional Lithium-Ion Battery Model (Presentation)
Kim, G. H.; Smith, K.
2008-05-01
Nonuniform battery physics can cause unexpected performance and life degradations in lithium-ion batteries; a three-dimensional cell performance model was developed by integrating an electrode-scale submodel using a multiscale modeling scheme.
Multiscale Simulation of Moist Global Atmospheric Flows
Grabowski, Wojciech W.; Smolarkiewicz, P. K.
2015-04-13
The overarching goal of this award was to include phase changes of the water substance and accompanying latent heating and precipitation processes into the all-scale nonhydrostatic atmospheric dynamics EUlerian/LAGrangian (EULAG) model. The model includes a fluid flow solver that is based on either an unabbreviated set of the governing equations (i.e., compressible dynamics) or a simplified set of equations without sound waves (i.e., sound-proof, either anelastic or pseudo-incompressible). The latter set has been used in small-scale dynamics for decades, but its application to all-scale dynamics (from small-scale to planetary) has never been studied in practical implementations. The highlight of the project is the development of the moist implicit compressible model that can be run with time steps as long as those of the anelastic model, limited only by the computational stability of the fluid flow and not by the speed of the sound waves that limit the stability of explicit compressible models. Applying various versions of the EULAG model within the same numerical framework allows for an unprecedented comparison of solutions obtained with various sets of the governing equations and straightforward evaluation of the impact of various physical parameterizations on the model solutions. The main outcomes of this study are reported in three papers, two published and one currently under review. These papers include comparisons between model solutions for idealized moist problems across the range of scales from small to planetary. These tests include: moist thermals rising in the stably-stratified environment (following Grabowski and Clark, J. Atmos. Sci. 1991) and in the moist-neutral environment (after Bryan and Fritsch, Mon. Wea. Rev. 2002), moist flows over a mesoscale topography (as in Grabowski and Smolarkiewicz, Mon. Wea. Rev. 2002), deep convection in a sheared environment (following Weisman and Klemp, Mon. Wea. Rev. 1982), moist extension of the baroclinic wave on
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
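The POD step can be sketched with a standard snapshot SVD; treating each temperature map as a column vector is the usual convention, but the dimensions and mode count here are assumptions:

```python
# Minimal snapshot-POD sketch: stack snapshots as columns, remove the mean,
# and take an SVD; columns of u are spatial modes ranked by captured variance.
import numpy as np

def pod(snapshots, n_modes=3):
    """snapshots: (n_pixels, n_times). Returns modes, singular values, coeffs."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    u, s, vt = np.linalg.svd(fluct, full_matrices=False)
    return u[:, :n_modes], s[:n_modes], vt[:n_modes]
```

The singular values quantify how much structure lives at each mode, which is the kind of scale-by-scale statistical information the analysis above compares between active and quiet regions.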
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes such as an oscillating water column, a point absorber, an overtopping system, and a bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been established. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Proposed SPAR Modeling Method for Quantifying Time Dependent Station Blackout Cut Sets
John A. Schroeder
2010-06-01
The U.S. Nuclear Regulatory Commission's (USNRC's) Standardized Plant Analysis Risk (SPAR) models and industry risk models take similar approaches to analyzing the risk associated with loss of offsite power and station blackout (LOOP/SBO) events at nuclear reactor plants. In both SPAR models and industry models, core damage risk resulting from a LOOP/SBO event is analyzed using a combination of event trees and fault trees that produce cut sets that are, in turn, quantified to obtain a numerical estimate of the resulting core damage risk. A proposed SPAR method for quantifying the time-dependent cut sets is sometimes referred to as a convolution method. The SPAR method reflects assumptions about the timing of emergency diesel failures, the timing of subsequent attempts at emergency diesel repair, and the timing of core damage that may be different from those often used in industry models. This paper describes the proposed SPAR method.
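The convolution idea can be illustrated numerically: the probability that a diesel fails and is then repaired within the available time is the convolution of the failure-time and repair-time densities. The exponential distributions and discretization below are assumptions for illustration, not the SPAR model's actual data:

```python
# Hedged numerical-convolution sketch: the pdf of (failure time + repair time)
# is the convolution of the two timing densities; integrating it to T gives
# the probability the sequence completes before core damage at time T.
import numpy as np

def prob_repair_before(T, fail_rate, repair_rate, n=2000):
    """P(failure time + repair time <= T), exponential timings assumed."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    f_fail = fail_rate * np.exp(-fail_rate * t)     # failure-time pdf
    f_rep = repair_rate * np.exp(-repair_rate * t)  # repair-time pdf
    pdf_sum = np.convolve(f_fail, f_rep)[:n] * dt   # pdf of the summed time
    return pdf_sum.sum() * dt                       # cdf evaluated at T
```

For exponential timings this has a closed form (the hypoexponential CDF), which makes the numerical convolution easy to check; in a cut-set quantification the densities would instead come from the plant model.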
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.
2010-07-10
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework's functionality to bridge gaps in the user's knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, the level of documentation associated with the components and the framework, and the design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each approach has strengths and weaknesses that determine which works best in a given application.
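Approach 2) above (a model-specific external wrapper) can be sketched as follows; the legacy model here is a stand-in function rather than a real executable, and all file formats and names are assumptions:

```python
# Sketch of an external wrapper: the legacy model's file-based I/O is adapted
# to an in-memory call interface with no changes to the model itself.
import json
import os
import tempfile

def legacy_model(in_path, out_path):
    """Stand-in for an unmodifiable legacy executable: reads a, b; writes a+b."""
    with open(in_path) as f:
        params = json.load(f)
    with open(out_path, "w") as f:
        json.dump({"result": params["a"] + params["b"]}, f)

class ModelWrapper:
    """Adapts the file-based legacy model to a framework-callable interface."""
    def run(self, a, b):
        with tempfile.TemporaryDirectory() as d:
            inp = os.path.join(d, "in.json")
            out = os.path.join(d, "out.json")
            with open(inp, "w") as f:
                json.dump({"a": a, "b": b}, f)
            legacy_model(inp, out)   # would be a subprocess call in practice
            with open(out) as f:
                return json.load(f)["result"]
```

The wrapper owns all file handling, so the framework sees only a clean call interface; this is why approach 2) requires no component model modifications, at the cost of maintaining one wrapper per model.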
Sereda, Yuriy V.; Ortoleva, Peter J.
2014-04-07
A closed kinetic equation for the single-particle density of a viscous simple liquid is derived using a variational method for the Liouville equation and a coarse-grained mean-field (CGMF) ansatz. The CGMF ansatz is based on the notion that during the characteristic time of deformation a given particle interacts with many others so that it experiences an average interaction. A trial function for the N-particle probability density is constructed using a multiscale perturbation method and the CGMF ansatz is applied to it. The multiscale perturbation scheme is based on the ratio of the average nearest-neighbor atom distance to the total size of the assembly. A constraint on the initial condition is discovered which guarantees that the kinetic equation is mass-conserving and closed in the single-particle density. The kinetic equation has much of the character of the Vlasov equation except that true viscous, and not Landau, damping is accounted for. The theory captures condensation kinetics and takes much of the character of the Gross-Pitaevskii equation in the weak-gradient short-range force limit.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; Shu, Chi-Wang
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Gao Yajun
2008-08-15
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
A Comparison of Multiscale Variations of Decade-long Cloud Fractions...
Office of Scientific and Technical Information (OSTI)
A Comparison of Multiscale Variations of Decade-long Cloud Fractions from Six Different Platforms over the Southern Great Plains in the United States Citation Details In-Document ...
Dorland, William
2014-11-18
The Center for Multiscale Plasma Dynamics (CMPD) was a five-year Fusion Science Center. The University of Maryland (UMD) and UCLA were the host universities. This final technical report describes the physics results from the UMD CMPD.
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems
Wolf, Elizabeth Skubak; Anderson, David F.
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
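The flavor of coupling simulations to estimate parametric sensitivities can be sketched (far more simply than the paper's hybrid pathwise estimators, and using plain finite differences with common random numbers rather than the paper's method) on a birth-process CTMC whose exact sensitivity is known:

```python
# Illustrative sketch, not the paper's estimator: two simulations of a simple
# birth process are driven by the SAME uniforms (common random numbers), and a
# centered finite difference estimates d/dtheta E[count at time T].
import math
import random

def count_births(theta, T, uniforms):
    """Events of a rate-theta Poisson birth process in [0, T], from fixed uniforms."""
    t, n = 0.0, 0
    for u in uniforms:
        t += -math.log(u) / theta      # exponential interarrival time
        if t > T:
            break
        n += 1
    return n

def crn_sensitivity(theta, T, h=0.05, paths=4000, seed=1):
    """Finite-difference sensitivity estimate with shared randomness per path."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        us = [1.0 - rng.random() for _ in range(200)]   # uniforms in (0, 1]
        total += (count_births(theta + h, T, us)
                  - count_births(theta - h, T, us)) / (2.0 * h)
    return total / paths
```

For this model E[count at time T] = theta * T, so the exact sensitivity is T; sharing the uniforms makes the two counts nearly identical per path, which is the variance-reduction idea that coupled sensitivity estimators exploit.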
COLLOQUIUM - PLEASE NOTE SPECIAL DATE/TIME: The Magnetospheric Multiscale Mission Investigation of Magnetic Reconnection
Professor Roy Torbert, University of New Hampshire. Princeton Plasma Physics Laboratory, MBG Auditorium, February 21, 2013, 10:30 am to 12:00 pm. In late fall 2014, NASA will launch the Magnetospheric Multiscale (MMS) mission to study the kinetic physics of magnetic reconnection.
Multi-scale evaporator architectures for geothermal binary power plants
Sabau, Adrian S; Nejad, Ali; Klett, James William; Bejan, Adrian
2016-01-01
In this paper, novel geometries of heat exchanger architectures are proposed for evaporators that are used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both macro-scale and micro-scale, which are designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several factors were considered in order to fully assess the performance of the new heat exchangers, such as weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.
Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts
Gamble, K. A.; Hales, J. D.; Yu, J.; Zhang, Y.; Bai, X.; Andersson, D.; Patra, A.; Wen, W.; Tome, C.; Baskes, M.; Martinez, E.; Stanek, C. R.; Miao, Y.; Ye, B.; Hofman, G. L.; Yacout, A. M.; Liu, W.
2015-09-01
U_{3}Si_{2} and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident-tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy’s Accident Tolerant Fuel High Impact Problem program significant work has been conducted to investigate the U_{3}Si_{2} and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories including Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.
Three-dimensional Dendritic Needle Network model with application...
Gardner, Shea Nicole
2007-10-23
A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment regimens based on the efficacy value.
A review of existing models and methods to estimate employment effects of pollution control policies
Darwin, R.F.; Nesse, R.J.
1988-02-01
The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.
A method for the quantification of model form error associated with physical systems.
Wallen, Samuel P.; Brake, Matthew Robert
2014-03-01
In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.
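The motivating problem can be shown with a toy example (not the report's quantitative method): two models that fit calibration data equally well can disagree sharply when extrapolated to a higher "load level", which is exactly the gap that model form error analysis tries to quantify. The functions and data below are synthetic.

```python
import math

def rms(model, data):
    """Root-mean-square misfit of a model against (x, y) data pairs."""
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in data) / len(data))

# Synthetic "experiment": y = sin(x) sampled on the calibration range [0, 1].
calib = [(i / 10, math.sin(i / 10)) for i in range(11)]

model_a = math.sin                   # happens to match the true physics
model_b = lambda x: x - x ** 3 / 6   # truncated series; fits [0, 1] almost as well

print(rms(model_a, calib), rms(model_b, calib))  # both tiny on calibration data
print(abs(model_a(3.0) - model_b(3.0)))          # large disagreement at a higher "load"
```

Validation against the calibration set alone cannot distinguish the two models; only a measure of model form error, or data at the extrapolated condition, can.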
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
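Importance sampling for structural reliability, the building block the adaptive scheme refines, can be sketched minimally (non-adaptive, with an illustrative threshold and proposal shift): sample from a density shifted toward the failure region and correct each hit with a likelihood-ratio weight.

```python
import math
import random

def phi(x, mu=0.0):
    """Standard-normal density, optionally shifted to mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def is_failure_prob(t, mu_shift, n, seed=0):
    """Estimate the small failure probability P(X > t), X ~ N(0, 1), by sampling
    from N(mu_shift, 1) centered in the failure region and reweighting."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_shift, 1.0)
        if x > t:
            total += phi(x) / phi(x, mu_shift)  # likelihood-ratio weight
    return total / n

est = is_failure_prob(t=3.0, mu_shift=3.0, n=20_000)
exact = 0.5 * math.erfc(3.0 / math.sqrt(2))  # P(X > 3), about 1.35e-3
print(est, exact)
```

Crude Monte Carlo would see only a handful of failures in 20,000 draws; the adaptive variants used in the study tune the proposal (here fixed at `mu_shift`) from the samples themselves.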
Modeling void formation dynamics in fibrous porous media with the lattice Boltzmann method
Spaid, M.A.A.; Phelan, F.R. Jr.
1997-12-31
A novel technique for simulating multiphase fluid flow in the microstructure of a fiber preform is developed, which has the capability of capturing the dynamics of void formation. The model is based on the lattice Boltzmann (LB) method -- a relatively new numerical technique which has rapidly emerged as a powerful tool for simulating multiphase fluid mechanics. The primary benefit of the lattice Boltzmann method is the ability to robustly model the interface between two immiscible fluids without the need for a complex interface tracking algorithm. In a previous paper, it was demonstrated that the lattice Boltzmann method may be modified to solve the Stokes/Brinkman formulation for flow in heterogeneous porous media. Multiphase infiltration of the fiber microstructure is modeled by combining the Stokes/Brinkman LB method, with the multiphase LB algorithm described by Shan and Chen. Numerical results are presented which compare void formation dynamics as a function of the nominal porosity for a model fiber microstructure. In addition, unsaturated permeabilities obtained from the numerical simulations are compared to saturated results for flow in the model porous microstructure.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for Fourier Amplitude Sensitivity Test (FAST) to identity parameter main effect. McKay method needs about 360 samples to evaluate the main effect, more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
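Of the screening methods compared, Morris One-At-a-Time is the simplest to sketch. The following is a minimal radial-OAT variant on the unit hypercube (illustrative only, not the PSUADE implementation); the mean absolute elementary effect per input ranks importance.

```python
import random

def morris_screening(f, k, n_traj=20, delta=0.25, seed=1):
    """Radial Morris One-At-a-Time screening: from each random base point,
    perturb one input at a time by delta and accumulate the absolute
    elementary effect |f(x + delta*e_i) - f(x)| / delta for each input i."""
    rng = random.Random(seed)
    mu_star = [0.0] * k
    for _ in range(n_traj):
        x = [rng.uniform(0, 1 - delta) for _ in range(k)]  # keep x + delta in [0, 1]
        y0 = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(f(xp) - y0) / delta
    return [m / n_traj for m in mu_star]

# Toy model: inputs 0 and 1 dominate, input 2 is nearly inert.
f = lambda x: 10 * x[0] + 5 * x[1] ** 2 + 0.01 * x[2]
effects = morris_screening(f, k=3)
print(effects)  # input 2's effect is negligible next to inputs 0 and 1
```

Each trajectory costs k + 1 model runs, which is why MOAT needed only a few hundred samples in the study above.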
Computational Modeling | Bioenergy | NREL
NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. Cellulosomes, complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass, are one subject of such coarse-grain modeling.
Model-based performance monitoring: Review of diagnostic methods and chiller case study
Haves, Phil; Khalsa, Sat Kartar
2000-05-01
The paper commences by reviewing the variety of technical approaches to the problem of detecting and diagnosing faulty operation in order to improve the actual performance of buildings. The review covers manual and automated methods, active testing and passive monitoring, the different classes of models used in fault detection, and methods of diagnosis. The process of model-based fault detection is then illustrated by describing the use of relatively simple empirical models of chiller energy performance to monitor equipment degradation and control problems. The CoolTools(trademark) chiller model identification package is used to fit the DOE-2 chiller model to on-site measurements from a building instrumented with high quality sensors. The need for simple algorithms to reject transient data, detect power surges and identify control problems is discussed, as is the use of energy balance checks to detect sensor problems. The accuracy with which the chiller model can be expected to predict performance is assessed from the goodness of fit obtained, and the implications for fault detection sensitivity and sensor accuracy requirements are discussed. A case study is described in which the model was applied retroactively to high-quality data collected in a San Francisco office building as part of a related project (Piette et al. 1999).
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
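The per-link error signal described above can be sketched directly; the weights here are illustrative placeholders for the constant coefficients that would come from the identified dynamic model.

```python
def link_error(pos_meas, pos_ref, vel_meas, vel_ref, w_pos, w_vel):
    """Error signal for one hydraulic arm link: a weighted sum of the position
    error and the error in its time derivative (velocity), as described above.
    The weights w_pos and w_vel stand in for the identified model coefficients."""
    return w_pos * (pos_meas - pos_ref) + w_vel * (vel_meas - vel_ref)

# Closed negative feedback: the control correction opposes the error signal.
e = link_error(pos_meas=1.10, pos_ref=1.00, vel_meas=0.20, vel_ref=0.00,
               w_pos=2.0, w_vel=0.5)
correction = -e
print(e)  # 2.0 * 0.10 + 0.5 * 0.20, about 0.30
```

Driving `e` to zero through the feedback loop simultaneously closes both the position and velocity tracking errors for that link.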
Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivities studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
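The short-ensemble idea can be demonstrated on a toy fast process (a stand-in, not CAM5): for a statistic of a rapidly equilibrating process, an ensemble of short, independently seeded runs reproduces the value obtained from one long serial integration, and its members can run in parallel.

```python
import random

def simulate(n_steps, rng):
    """Stand-in for a model diagnostic: time-mean of an AR(1)-like fast process
    x_t = 0.5 * x_{t-1} + noise, noise ~ N(1, 1), so the stationary mean is 2."""
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x = 0.5 * x + rng.gauss(1.0, 1.0)
        total += x
    return total / n_steps

# One long serial integration...
long_run = simulate(100_000, random.Random(0))
# ...versus an ensemble of 200 short, independently seeded runs.
short_ensemble = sum(simulate(500, random.Random(s)) for s in range(200)) / 200
print(long_run, short_ensemble)  # both near the stationary mean of 2
```

The ensemble uses the same total number of steps, but its 200 members are independent, so on a parallel machine the turnaround time shrinks by roughly the ensemble size, which is the efficiency argument made above.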
In silico method for modelling metabolism and gene product expression at genome scale
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem; Portnoy, Vasiliy A.; Lewis, Nathan E.; Orth, Jeffrey D.; Rutledge, Alexandra C.; Smith, Richard D.; Adkins, Joshua N.; Zengler, Karsten; Palsson, Bernard O.
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improvement of the genome and transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.
2014-09-04
In interconnected power systems, dynamic model reduction can be applied on generators outside the area of interest to mitigate the computational cost with transient stability studies. This paper presents an approach of deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises of three steps, dynamic-feature extraction, attribution and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step for matching the extracted dynamic features with the highest similarity, forming a suboptimal basis of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. Network model is un-changed in the DEAR method. Tests on several IEEE standard systems show that the proposed method gets better reduction ratio and response errors than the traditional coherency aggregation methods.
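The reconstruction step of the DEAR approach, approximating a generator's response as a linear combination of characteristic generators, reduces to a least-squares fit. A minimal sketch with two basis signals and synthetic post-disturbance trajectories follows (the full method's SVD-based feature extraction and attribution steps are omitted, and the signals are hypothetical):

```python
import math

def dot(a, b):
    """Inner product of two equal-length signal vectors."""
    return sum(x * y for x, y in zip(a, b))

def reconstruct(target, g1, g2):
    """Least-squares coefficients c1, c2 such that c1*g1 + c2*g2 best matches
    the target signal, via the 2x2 normal equations solved in closed form."""
    a11, a12, a22 = dot(g1, g1), dot(g1, g2), dot(g2, g2)
    b1, b2 = dot(g1, target), dot(g2, target)
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (b2 * a11 - b1 * a12) / det
    return c1, c2

# Synthetic post-disturbance swings for two "characteristic" generators.
t = [i * 0.05 for i in range(100)]
g1 = [math.sin(2.0 * ti) for ti in t]
g2 = [math.exp(-0.3 * ti) * math.cos(1.1 * ti) for ti in t]
target = [0.7 * a + 0.2 * b for a, b in zip(g1, g2)]  # lies exactly in their span

c1, c2 = reconstruct(target, g1, g2)
print(c1, c2)  # recovers roughly 0.7 and 0.2
```

In the actual method the basis is chosen by matching SVD-extracted dynamic features, and real responses only approximately lie in the span, so the fit residual measures the reduction error.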
Boscá, A.; Pedrós, J.; Martínez, J.; Calle, F.
2015-01-28
Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method allows calculation of parameters such as the mobility, the contact resistance, and the fixed charge. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method's output values can be correlated with several issues arising during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metallization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
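As a rough illustration of extracting a parameter from measured characteristics (using the standard FET relation mu = gm * L / (W * Cox * Vds), not the paper's specific fitting model), field-effect mobility can be estimated from the peak transconductance. All device values below are hypothetical.

```python
def mobility_from_transconductance(id_vals, vg_vals, L, W, Cox, Vds):
    """Finite-difference transconductance gm = dId/dVg along the transfer curve,
    then the peak field-effect mobility mu = gm_peak * L / (W * Cox * Vds)."""
    gms = [(id_vals[i + 1] - id_vals[i]) / (vg_vals[i + 1] - vg_vals[i])
           for i in range(len(id_vals) - 1)]
    gm_peak = max(abs(g) for g in gms)
    return gm_peak * L / (W * Cox * Vds)

# Hypothetical transfer curve: Id rises linearly above threshold, slope 2e-5 A/V.
vg = [0.0, 0.5, 1.0, 1.5, 2.0]        # gate voltage, V
ids = [0.0, 0.0, 1e-5, 2e-5, 3e-5]    # drain current, A
mu = mobility_from_transconductance(ids, vg, L=10e-6, W=20e-6,
                                    Cox=1.15e-4, Vds=0.1)
print(mu)  # m^2/(V*s); of order 1, i.e. thousands of cm^2/(V*s)
```

The paper's method goes further, fitting a full device model to separate contact resistance and fixed charge from the intrinsic mobility, which a bare transconductance estimate like this cannot do.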
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x,vx,vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r,z,vr,vz) phase space are presented.
Proximity graphs based multi-scale image segmentation
Skurikhin, Alexei N
2008-01-01
We present a novel multi-scale image segmentation approach based on irregular triangular and polygonal tessellations produced by proximity graphs. Our approach consists of two separate stages: polygonal seed generation followed by an iterative bottom-up polygon agglomeration into larger chunks. We employ constrained Delaunay triangulation combined with principles known from visual perception to extract an initial irregular polygonal tessellation of the image. These initial polygons are built upon a triangular mesh composed of irregularly sized triangles, and their shapes are adapted to the image content. We then represent the image as a graph with vertices corresponding to the polygons and edges reflecting polygon relations. The segmentation problem is then formulated as Minimum Spanning Tree extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal grids by an iterative graph contraction constructing the Minimum Spanning Tree. The contraction uses local information and merges the polygons bottom-up based on local region- and edge-based characteristics.
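The bottom-up agglomeration can be sketched as Kruskal-style Minimum Spanning Tree growth over the polygon adjacency graph with union-find bookkeeping; the edge weights and stopping threshold below are illustrative stand-ins for the region- and edge-based dissimilarities.

```python
class UnionFind:
    """Disjoint sets over polygon indices, for Kruskal-style merging."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return False
        self.parent[ri] = rj
        return True

def mst_merge(n_polygons, edges, max_weight):
    """edges: (weight, u, v) tuples where weight is the dissimilarity between
    adjacent polygons (e.g. a mean-intensity difference). Processing edges in
    increasing weight order and stopping at max_weight leaves one connected
    component, i.e. one merged segment, per group of similar polygons."""
    uf = UnionFind(n_polygons)
    for w, u, v in sorted(edges):
        if w <= max_weight:
            uf.union(u, v)
    return [uf.find(i) for i in range(n_polygons)]

# Four polygons: {0, 1} similar, {2, 3} similar, a weak link between the groups.
edges = [(0.1, 0, 1), (0.2, 2, 3), (0.9, 1, 2)]
labels = mst_merge(4, edges, max_weight=0.5)
print(labels)  # polygons 0 and 1 share a label; 2 and 3 share a different one
```

Sweeping `max_weight` from small to large replays the merge sequence and yields exactly the fine-to-coarse hierarchy of partitions described above.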
Multi-Scale Investigation of Sheared Flows In Magnetized Plasmas
Thomas, Edward, Jr.
2014-09-19
Flows parallel and perpendicular to magnetic fields in a plasma are important phenomena in many areas of plasma science research. The presence of these spatially inhomogeneous flows is often associated with the stability of the plasma. In fusion plasmas, these sheared flows can be stabilizing while in space plasmas, these sheared flows can be destabilizing. Because of this, there is broad interest in understanding the coupling between plasma stability and plasma flows. This research project has engaged in a study of the plasma response to spatially inhomogeneous plasma flows using three different experimental devices: the Auburn Linear Experiment for Instability Studies (ALEXIS) and the Compact Toroidal Hybrid (CTH) stellarator devices at Auburn University, and the Space Plasma Simulation Chamber (SPSC) at the Naval Research Laboratory. This work has shown that there is a commonality of the plasma response to sheared flows across a wide range of plasma parameters and magnetic field geometries. The goal of this multi-device, multi-scale project is to understand how sheared flows established by the same underlying physical mechanisms lead to different plasma responses in fusion, laboratory, and space plasmas.
Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration
Daniels, Jeff
2012-11-30
Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO{sub 2} being released into the atmosphere daily. Test sites for CO{sub 2} injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full scale implementation is monitoring and verifying the permanence of injected CO{sub 2}. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO{sub 2} at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic methods) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the models that mimic a potential CO{sub 2} injection site, numerically forward model a geophysical survey, and analyze the results. A test site located in Warren County, Ohio was selected as the test site for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and cap rock, the Eau Claire Formation. Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculating electrical permittivity values, seismic data
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
Yunovich, M.; Thompson, N.G.
1998-12-31
During the past fifteen years corrosion inhibiting admixtures (CIAs) have become increasingly popular for protection of reinforced components of highway bridges and other structures from damage induced by chlorides. However, there remains considerable debate about the benefits of CIAs in concrete. A variety of testing methods to assess the performance of CIAs have been reported in the literature, ranging from tests in simulated pore solutions to long-term exposures of concrete slabs. The paper reviews the published techniques and recommends the methods which would make up a comprehensive CIA effectiveness testing program. The results of this set of tests would provide data which can be used to rank the presently available commercial CIAs and future candidate formulations utilizing a proposed predictive model. The model is based on relatively short-term laboratory testing and considers several phases of the service life of a structure (corrosion initiation, corrosion propagation without damage, and damage to the structure).
An efficient modeling method for thermal stratification simulation in a BWR suppression pool
Haihua Zhao; Ling Zou; Hongbin Zhang; Hua Li; Walter Villanueva; Pavel Kudinov
2012-09-01
The suppression pool in a BWR plant not only is the major heat sink within the containment system, but also provides the major source of emergency cooling water for the reactor core. In several accident scenarios, such as a LOCA and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (Available Net Positive Suction Head) and therefore the performance of the pump which draws cooling water back to the core. Current safety analysis codes use 0-D lumped parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in prediction of scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, are used for validation. GOTHIC lumped parameter models are used to obtain boundary conditions for the BMIX++ code and CFD simulations. Comparison of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data is discussed in detail.
Mathematical and computational modeling of the diffraction problems by discrete singularities method
Nesvit, K. V.
2014-11-12
The main objective of this study is to reduce the boundary-value problems of scattering and diffraction of waves on plane-parallel structures to singular or hypersingular integral equations. For these cases we use a method of parametric representations of integral and pseudo-differential operators. Numerical results of the model scattering problems on periodic and boundary gratings, and also on gratings above a flat screen reflector, are presented in this paper.
Shi, Xing; Lin, Guang; Zou, Jianfeng; Fedosov, Dmitry A.
2013-07-20
To model red blood cell (RBC) deformation in flow, the recently developed LBM-DLM/FD method [Shi and Lim, 2007], derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method, is extended to employ a mesoscopic network model for simulations of red blood cell deformation. The flow is simulated by the lattice Boltzmann method with an external force, the network model is used to model red blood cell deformation, and the fluid-RBC interaction is enforced by the Lagrange multiplier. Stretching tests on both coarse and fine meshes are performed and compared with the corresponding experimental data to validate the parameters of the RBC network model. In addition, RBC deformation in pipe flow and in shear flow is simulated, demonstrating the capability of the current method for modeling RBC deformation in various flows.
A novel method for modeling the recoil in W boson events at hadron colliders
Abazov, Victor Mukhamedovich; Abbott, Braden Keim; Abolins, Maris A.; Acharya, Bannanje Sripath; Adams, Mark Raymond; Adams, Todd; Aguilo, Ernest; Ahsan, Mahsana; Alexeev, Guennadi D.; Alkhazov, Georgiy D.; Alton, Andrew K.; /Michigan U. /Augustana Coll., Sioux Falls /Northeastern U.
2009-07-01
We present a new method for modeling the hadronic recoil in W {yields} {ell}{nu} events produced at hadron colliders. The recoil is chosen from a library of recoils in Z {yields} {ell}{ell} data events and overlaid on a simulated W {yields} {ell}{nu} event. Implementation of this method requires that the data recoil library describe the properties of the measured recoil as a function of the true, rather than the measured, transverse momentum of the boson. We address this issue using a multidimensional Bayesian unfolding technique. We estimate the statistical and systematic uncertainties from this method for the W boson mass and width measurements assuming 1 fb{sup -1} of data from the Fermilab Tevatron. The uncertainties are found to be small and comparable to those of a more traditional parameterized recoil model. For the high precision measurements that will be possible with data from Run II of the Fermilab Tevatron and from the CERN LHC, the method presented in this paper may be advantageous, since it does not require an understanding of the measured recoil from first principles.
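The library-overlay idea above can be sketched in a few lines: bin the recoils measured in Z → ℓℓ data by the boson's true pT, then draw one at random from the matching bin for each simulated W event. The bin width, data layout, and function names here are illustrative assumptions, and the Bayesian unfolding step that produces the true pT is taken as already done:

```python
import random

def build_recoil_library(z_events, bin_width=5.0):
    """Bin recoil vectors measured in Z -> ll data by the boson's true pT (GeV).
    z_events: iterable of (true_pt, recoil) pairs. In the real analysis the
    true pT comes from multidimensional Bayesian unfolding of the measured pT."""
    library = {}
    for true_pt, recoil in z_events:
        library.setdefault(int(true_pt // bin_width), []).append(recoil)
    return library

def overlay_recoil(library, true_w_pt, bin_width=5.0, rng=random):
    """Draw a recoil at random from the library bin matching the simulated
    W boson's true pT, to be overlaid on the simulated W -> l nu event."""
    return rng.choice(library[int(true_w_pt // bin_width)])
```

The appeal of the approach is visible even in this toy form: the overlaid recoil carries all detector effects of real data, so no first-principles model of the measured recoil is needed.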
Crystal Plasticity Model of Reactor Pressure Vessel Embrittlement in GRIZZLY
Chakraborty, Pritam; Biner, Suleyman Bulent; Zhang, Yongfeng; Spencer, Benjamin Whiting
2015-07-01
The integrity of reactor pressure vessels (RPVs) is of utmost importance to ensure safe operation of nuclear reactors under extended lifetime. Microstructure-scale models at various length and time scales, coupled concurrently or through homogenization methods, can play a crucial role in understanding and quantifying irradiation-induced defect production, growth and their influence on mechanical behavior of RPV steels. A multi-scale approach, involving atomistic, meso- and engineering-scale models, is currently being pursued within the GRIZZLY project to understand and quantify irradiation-induced embrittlement of RPV steels. Within this framework, a dislocation-density based crystal plasticity model has been developed in GRIZZLY that captures the effect of irradiation-induced defects on the flow stress behavior and is presented in this report. The present formulation accounts for the interaction between self-interstitial loops and matrix dislocations. The model predictions have been validated with experiments and dislocation dynamics simulation.
Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method
Haihua Zhao; Ling Zou; Hongbin Zhang
2014-01-01
The suppression pool in a boiling water reactor (BWR) plant not only is the major heat sink within the containment system, but also provides the major source of emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero-dimensional (0-D) lumped parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification in which the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. One heat-up experiment performed at the POOLEX facility in Finland, which was designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, is used for
A Method for Modeling Household Occupant Behavior to Simulate Residential Energy Consumption
Johnson, Brandon J; Starke, Michael R; Abdelaziz, Omar; Jackson, Roderick K; Tolbert, Leon M
2014-01-01
This paper presents a statistical method for modeling the behavior of household occupants to estimate residential energy consumption. Using data gathered by the U.S. Census Bureau in the American Time Use Survey (ATUS), actions carried out by survey respondents are categorized into ten distinct activities. These activities are defined to correspond to the major energy-consuming loads commonly found within the residential sector. Next, time-varying, minute-resolution Markov chain statistical models of different occupant types are developed. Using these behavioral models, individual occupants are simulated to show how an occupant interacts with the major residential energy-consuming loads throughout the day. From these simulations, the minimum number of occupants, and consequently the minimum number of multiple-occupant households, needing to be simulated to produce a statistically accurate representation of aggregate residential behavior can be determined. Finally, future work will involve the use of these occupant models alongside residential load models to produce a high-resolution energy consumption profile and estimate the potential for demand response from residential loads.
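A time-varying Markov chain of the kind described above can be stepped with very little code. The sketch below is a generic time-inhomogeneous chain, not the paper's fitted ATUS model; the state names and the per-minute transition-table format are assumptions for illustration:

```python
import random

def simulate_occupant(transition_by_minute, start_state, minutes, rng=random):
    """Step a time-inhomogeneous Markov chain one minute at a time.
    transition_by_minute: list indexed by minute-of-day; each entry maps a
    state to a list of (next_state, probability) pairs summing to 1.
    Returns the sequence of occupied states, including the start state."""
    state, path = start_state, [start_state]
    for t in range(minutes):
        dist = transition_by_minute[t % len(transition_by_minute)]
        r, cum = rng.random(), 0.0
        for nxt, p in dist[state]:
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path
```

In the paper's setting each minute of the day gets its own transition matrix estimated from survey data, so morning and evening produce very different activity sequences; here a single repeated matrix stands in for that.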
Plys, Martin; Burelbach, James; Lee, Sung Jin; Apthorpe, Robert
2013-07-01
A unified modeling method applicable to the processing, shipping, and storage of spent nuclear fuel and sludge has been incrementally developed, validated, and applied over a period of about 15 years at the US DOE Hanford site. The software, FATE{sup TM}, provides a consistent framework for a wide dynamic range of common DOE and commercial fuel and waste applications. It has been used during the design phase, for safety and licensing calculations, and offers a graded approach to complex modeling problems encountered at DOE facilities and abroad (e.g., Sellafield). FATE has also been used for commercial power plant evaluations including reactor building fire modeling for fire PRA, evaluation of hydrogen release, transport, and flammability for post-Fukushima vulnerability assessment, and drying of commercial oxide fuel. FATE comprises an integrated set of models for fluid flow, aerosol and contamination release, transport, and deposition, thermal response including chemical reactions, and evaluation of fire and explosion hazards. It is one of the few software tools that combine both source term and thermal-hydraulic capability. Practical examples are described below, with consideration of appropriate model complexity and validation. (authors)
Model based approach to UXO imaging using the time domain electromagnetic method
Lavely, E.M.
1999-04-01
Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
MacAlpine, Sara; Deline, Chris
2015-09-15
It is often difficult to model the effects of partial shading conditions on PV array performance, as shade losses are nonlinear and depend heavily on a system's particular configuration. This work describes and implements a simple method for modeling shade loss: a database of shade impact results (loss percentages), generated using a validated, detailed simulation tool and encompassing a wide variety of shading scenarios. The database is intended to predict shading losses in crystalline silicon PV arrays and is accessed using basic inputs generally available in any PV simulation tool. Performance predictions using the database are within 1-2% of measured data for several partially shaded PV systems, and within 1% of those predicted by the full, detailed simulation tool on an annual basis. The shade loss database shows potential to considerably improve performance predictions for partially shaded PV systems.
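The database approach above amounts to a precomputed lookup: map a scenario's basic inputs onto the nearest stored case and return its loss fraction. A minimal sketch, in which the key structure (shaded fraction, string count) and the nearest-entry metric are hypothetical simplifications of whatever inputs the real database uses:

```python
def lookup_shade_loss(database, shaded_fraction, string_count):
    """Nearest-entry lookup in a precomputed shade-impact table.
    database: dict mapping (shaded_fraction, string_count) keys to loss
    fractions produced offline by a detailed simulation tool. The key
    fields and distance metric here are illustrative assumptions."""
    key = min(database,
              key=lambda k: abs(k[0] - shaded_fraction) + abs(k[1] - string_count))
    return database[key]


# Toy table: loss grows nonlinearly with shaded fraction for one string.
db = {(0.0, 1): 0.00, (0.5, 1): 0.30, (1.0, 1): 0.80}
```

The design choice is the usual table-versus-model tradeoff: all the nonlinear electrical detail is paid for once when the database is generated, so the hourly simulation loop only does cheap lookups.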
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
Toni Smith; Lyudmila V. Slipchenko; Mark S. Gordon
2008-02-27
This study compares the results of the general effective fragment potential (EFP2) method to the results of a previous combined coupled cluster with single, double, and perturbative triple excitations [CCSD(T)] and symmetry-adapted perturbation theory (SAPT) study [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690] on substituent effects in {pi}-{pi} interactions. EFP2 is found to accurately model the binding energies of the benzene-benzene, benzene-phenol, benzene-toluene, benzene-fluorobenzene, and benzene-benzonitrile dimers, as compared with high-level methods [Sinnokrot and Sherrill, J. Am. Chem. Soc., 2004, 126, 7690], but at a fraction of the computational cost of CCSD(T). In addition, an EFP-based Monte Carlo/simulated annealing study was undertaken to examine the potential energy surface of the substituted dimers.
MODELING RESONANCE INTERFERENCE BY 0-D SLOWING-DOWN SOLUTION WITH EMBEDDED SELF-SHIELDING METHOD
U.S. Department of Energy (DOE) all webpages (Extended Search)
Yuxuan Liu and William Martin, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Blvd., Ann Arbor, MI 48109 (yuxuanl@umich.edu; wrm@umich.edu); Kang-Seog Kim and Mark Williams, Oak Ridge National Laboratory, One Bethel Valley Road, P.O. Box 2008, Oak Ridge, TN 37831-6172, USA (kimk1@ornl.gov; williamsml@ornl.gov). ABSTRACT: The resonance integral table based
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher-dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x,v_{x},v_{y}) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r,z,v_{r},v_{z}) phase space are presented.
Zhen, Yi; Zhang, Xinyuan; Wang, Ningli; Gu, Suicheng; Meng, Xin; Zheng, Bin; Pu, Jiantao E-mail: puj@upmc.edu
2014-09-15
Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy is to deal with the variability of retinal vessels in width, intensity, resolution, and orientation; the progressive fusion is to combine consequent images while avoiding a sudden fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.
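The sensitivity, specificity, and accuracy figures quoted above all derive from a pixel-level confusion matrix between the computed and manually delineated vessel maps. A minimal sketch of those standard definitions (the counts below are made-up illustrative numbers, not results from the paper):

```python
def pixel_metrics(tp, fp, tn, fn):
    """Standard classification metrics from pixel counts:
    tp/fn = vessel pixels correctly/incorrectly labeled,
    tn/fp = background pixels correctly/incorrectly labeled."""
    sensitivity = tp / (tp + fn)            # fraction of vessel pixels found
    specificity = tn / (tn + fp)            # fraction of background kept clean
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

Note the asymmetry the abstract's numbers reflect: because vessel pixels are a small minority of a fundus image, accuracy can sit near 98-99% even when length-based sensitivity is only around 70-80%.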
Shell model method for Gamow-Teller transitions in heavy, deformed nuclei
Gao Zaochun [Joint Institute for Nuclear Astrophysics and Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan 48824 (United States); China Institute of Atomic Energy, P.O. Box 275 (18), Beijing 102413 (China); Sun Yang [Joint Institute for Nuclear Astrophysics and Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Chen, Y.-S. [China Institute of Atomic Energy, P.O. Box 275(18), Beijing 102413 (China); Institute of Theoretical Physics, Academia Sinica, Beijing 100080 (China)
2006-11-15
A method for calculation of Gamow-Teller transition rates is developed by using the concept of the Projected Shell Model (PSM). The shell model basis is constructed by superimposing angular-momentum-projected multiquasiparticle configurations, and nuclear wave functions are obtained by diagonalizing the two-body interactions in these projected states. Calculation of transition matrix elements in the PSM framework is discussed in detail, and the effects caused by the Gamow-Teller residual forces and by configuration-mixing are studied. With this method, it may become possible to perform a state-by-state calculation for {beta}-decay and electron-capture rates in heavy, deformed nuclei at finite temperatures. Our first example indicates that, while experimentally known Gamow-Teller transition rates from the ground state of the parent nucleus are reproduced, stronger transitions from some low-lying excited states are predicted to occur, which may considerably enhance the total decay rates once these nuclei are exposed to hot stellar environments.
A hybrid transport-diffusion model for radiative transfer in absorbing and scattering media
Roger, M.; Caliot, C.; Crouseilles, N.; Coelho, P.J.
2014-10-15
A new multi-scale hybrid transport-diffusion model for radiative transfer is proposed in order to improve the efficiency of the calculations close to the diffusive regime, in absorbing and strongly scattering media. In this model, the radiative intensity is decomposed into a macroscopic component calculated by the diffusion equation, and a mesoscopic component. The transport equation for the mesoscopic component makes it possible to correct the estimate from the diffusion equation and thus to obtain the solution of the linear radiative transfer equation. In this work, results are presented for stationary and transient radiative transfer cases, in examples concerning concentrated solar and optical tomography applications. The Monte Carlo and discrete-ordinates methods are used to solve the mesoscopic equation. It is shown that the multi-scale model improves the efficiency of the calculations when the medium is close to the diffusive regime. The proposed model is a good alternative for radiative transfer in the intermediate regime, where the macroscopic diffusion equation is not accurate enough and the radiative transfer equation requires too much computational effort.
Corcelli, S.A.; Kress, J.D.; Pratt, L.R.
1995-08-07
This paper develops and characterizes mixed direct-iterative methods for boundary integral formulations of continuum dielectric solvation models. We give an example, the Ca{sup ++}{hor_ellipsis}Cl{sup {minus}} pair potential of mean force in aqueous solution, for which a direct solution at thermal accuracy is difficult and for which mixed direct-iterative methods thus seem necessary to obtain the required high resolution. For the simplest such formulations, Gauss-Seidel iteration diverges in rare cases. This difficulty is analyzed by obtaining the eigenvalues and the spectral radius of the non-symmetric iteration matrix. This establishes that those divergences are due to inaccuracies of the asymptotic approximations used in evaluating the matrix elements corresponding to accidental close encounters of boundary elements on different atomic spheres. The spectral radii are then greater than one for those diverging cases. This problem is cured by checking for boundary element pairs closer than the typical spatial extent of the boundary elements and, for those cases, performing an ``in-line`` Monte Carlo integration to evaluate the required matrix elements. These difficulties are not expected and have not been observed for the thoroughly coarsened equations obtained when only a direct solution is sought. Finally, we give an example application of hybrid quantum-classical methods to deprotonation of orthosilicic acid in water.
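The diagnostic used above, the spectral radius of the Gauss-Seidel iteration matrix, is easy to compute for a small system: for A = D + L + U the iteration matrix is T = -(D+L)^{-1}U, and the iteration converges from any starting vector iff its spectral radius is below one. A generic numerical sketch (not the paper's boundary-element matrices):

```python
import numpy as np

def gauss_seidel_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix
    T = -(D+L)^{-1} U for the splitting A = (D+L) + U.
    Convergence for arbitrary starting vectors holds iff the radius < 1."""
    DL = np.tril(A)                 # D + L: lower triangle incl. diagonal
    U = A - DL                      # strict upper triangle
    T = -np.linalg.solve(DL, U)     # iteration matrix
    return max(abs(np.linalg.eigvals(T)))
```

For a diagonally dominant matrix such as [[4, 1], [1, 3]] the radius works out to 1/12, well inside the convergent region; the paper's divergent cases correspond to inaccurate off-diagonal entries pushing this value above one.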
Taylor, G.; Dong, C.; Sun, S.
2010-03-18
A mathematical model for contaminant species passing through fractured porous media is presented. In the numerical model, we combine two locally conservative methods, i.e., the mixed finite element (MFE) method and the finite volume method. An adaptive triangular mesh is used for effective treatment of the fractures. A hybrid MFE method is employed to provide an accurate approximation of the velocity field for both the fractures and the matrix, which is crucial to the convection part of the transport equation. The finite volume method and the standard MFE method are used to approximate the convection and dispersion terms, respectively. The model is used to investigate the interaction of adsorption with transport and to extract information on effective adsorption distribution coefficients. Numerical examples in different fractured media illustrate the robustness and efficiency of the proposed numerical model.
Sershen, Cheryl L.; Plimpton, Steven J.; May, Elebeoba E.
2016-02-15
Mycobacterium tuberculosis associated granuloma formation can be viewed as a structural immune response that can contain and halt the spread of the pathogen. In several mammalian hosts, including non-human primates, Mtb granulomas are often hypoxic, although this has not been observed in wild type murine infection models. While a presumed consequence, the structural contribution of the granuloma to oxygen limitation and the concomitant impact on Mtb metabolic viability and persistence remain to be fully explored. We develop a multiscale computational model to test to what extent in vivo Mtb granulomas become hypoxic, and investigate the effects of hypoxia on host immune response efficacy and mycobacterial persistence. Our study integrates a physiological model of oxygen dynamics in the extracellular space of alveolar tissue, an agent-based model of cellular immune response, and a systems biology-based model of Mtb metabolic dynamics. Our theoretical studies suggest that the dynamics of granuloma organization mediates oxygen availability and illustrate the immunological contribution of this structural host response to infection outcome. Furthermore, our integrated model demonstrates the link between the structural immune response and the mechanistic drivers influencing Mtb's adaptation to its changing microenvironment, and the qualitative infection outcome scenarios of clearance, containment, dissemination, and a newly observed theoretical outcome of transient containment. We observed hypoxic regions in the containment granuloma similar in size to granulomas found in mammalian in vivo models of Mtb infection. In the case of the containment outcome, our model uniquely demonstrates that immune-response-mediated hypoxic conditions help foster the shift-down of bacteria through two stages of adaptation similar to the in vitro non-replicating persistence (NRP) observed in the Wayne model of Mtb dormancy. Lastly, the adaptation in part contributes to the ability of Mtb to
Fix, N. J.
2008-01-31
The purpose of the project is to conduct research at an Integrated Field-Scale Research Challenge Site in the Hanford Site 300 Area, CERCLA OU 300-FF-5 (Figure 1), to investigate multi-scale mass transfer processes associated with a subsurface uranium plume impacting both the vadose zone and groundwater. The project will investigate a series of science questions posed for research related to the effect of spatial heterogeneities, the importance of scale, coupled interactions between biogeochemical, hydrologic, and mass transfer processes, and measurements/approaches needed to characterize a mass-transfer dominated system. The research will be conducted by evaluating three (3) different hypotheses focused on multi-scale mass transfer processes in the vadose zone and groundwater, their influence on field-scale U(VI) biogeochemistry and transport, and their implications to natural systems and remediation. The project also includes goals to 1) provide relevant materials and field experimental opportunities for other ERSD researchers and 2) generate a lasting, accessible, and high-quality field experimental database that can be used by the scientific community for testing and validation of new conceptual and numerical models of subsurface reactive transport.
A simple method to account for drift orbit effects when modeling radio frequency heating in tokamaks
Eester, D. van
2005-09-26
In recent years, tremendous progress has been made in modeling radio frequency heating in tokamaks. Not only have the adopted models gradually become more realistic, but the present generation of computers has also made it possible to study wave-particle interaction effects with previously unattainable detail. In the present paper a semi-analytical method is adopted to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies, accounting for drift orbit effects in an axisymmetric tokamak. The method relies on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it hinges on the local bookkeeping of the relation between the variables defining an orbit and those describing the magnetic geometry. Although the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, the focus here is on the latter. Using the coefficients evaluated with the proposed semi-analytical method, a 3-D Fokker-Planck code was developed which accounts for the radial width of the guiding center orbits and thus not only describes RF-induced velocity space diffusion, but equally accounts for the RF-induced radial drift. Preliminary results of this new 3-D Fokker-Planck code are presented. The adopted numerical resolution relies on a subdivision of the integration domain into tetrahedra. This specific shape of the elementary volumes allows the boundary conditions (in particular the nonlocal conditions across the curved trapped/passing boundary connecting one trapped to two passing orbits) to be imposed elegantly. The particular chosen shape also readily permits zooming in on regions where more detail is required. Cast in its weak Galerkin form, the equation is solved using the finite element technique. Unless special attention is devoted to the optimization of the inversion of the system of linear equations resulting from projecting the
Michael R Tonks; Yongfeng Zhang; Xianming Bai
2014-06-01
This report summarizes development work funded by the Nuclear Energy Advanced Modeling and Simulation program's Fuels Product Line (FPL) to develop a mechanistic model for the average grain size in UO2 fuel. The model is developed using a multiscale modeling and simulation approach involving atomistic simulations, as well as mesoscale simulations using INL's MARMOT code.
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Energy Science and Technology Software Center (OSTI)
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator of diffraction images from an input microstructure.
Signal processing Model/Method for Recovering Acoustic Reflectivity of Spot Weld
Energy Science and Technology Software Center (OSTI)
2005-09-08
Until recently, U.S. auto manufacturers have inspected the veracity of welds in the auto bodies they build by using destructive tear-down, which typically results in more than $1M of scrappage per plant per year. Much of this expense could possibly be avoided with a nondestructive technique (and 100% instead of 1% inspection could be achieved). Recent advances in ultrasound probes promise to provide a sufficiently accurate non-destructive evaluation technique, but the necessary signal processing has not yet been developed. This disclosure describes a signal processing model and method useful for diagnosing the veracity of spot welds between two sheets of the same thickness from ultrasound signals. Standard systems theory describes a signal as a convolution of a transducer function, h(t), and an impulse train (beta(t), tau(t)) [1] (see Eq. (1) attached). With a Gaussian wavelet as a transducer function, this model describes the signal from an ultrasound probe quite well, and the literature provides many methods for "deconvolution," for recovery of the impulse train from the signal [see e.g., 2-3]. What is novel about the disclosed technique is the model that describes the impulse train as a function of reflectivity, the share of energy incident on the interface that is reflected, and that allows the recovery of its estimated value. The reflectivity estimate provides an ideal indicator of weld veracity, compressing each signal into a single value between 0 and 1, which can then be displayed as a 2-D greyscale or colormap of the weld. The model describing the system is attached as Eqs. (2). These equations account for the energy in the probe-side and opposite sheets. In each period, this energy is a sum of that reflected from the same sheet plus that transmitted from the opposite (dampened by material attenuation at rate a). This model is consistent with physical first principles (in particular the First and Second Laws of Thermodynamics) and has been verified
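The forward direction of the convolution model above is easy to sketch: a Gaussian transducer wavelet convolved with a periodic impulse train whose echo amplitudes depend on the interface reflectivity. The geometric decay law, attenuation constant, and time scales below are illustrative assumptions standing in for the disclosure's Eqs. (2), not the actual model:

```python
import math

def gaussian_wavelet(t, sigma=0.1):
    """Gaussian transducer response h(t), centered at t = 0."""
    return math.exp(-t * t / (2.0 * sigma * sigma))

def synthesize_signal(reflectivity, period, n_echoes, dt=0.01, length=4.0, atten=0.9):
    """Forward model: convolve h(t) with an impulse train at multiples of the
    round-trip period. Echo k is given the (assumed) amplitude
    (atten * reflectivity)**k, so larger reflectivity -> slower echo decay."""
    n = int(length / dt)
    signal = [0.0] * n
    for k in range(1, n_echoes + 1):
        amp = (atten * reflectivity) ** k   # assumed echo-amplitude law
        t_k = k * period                    # k-th round-trip arrival time
        for i in range(n):
            signal[i] += amp * gaussian_wavelet(i * dt - t_k)
    return signal
```

Estimating the reflectivity is then an inverse problem: fit this forward model to a measured trace and read off the single 0-to-1 value per probe position that the disclosure maps over the weld.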
Evaluation of Test Methods for Triaxially Braided Composites using a Meso-Scale Finite Element Model
Zhang, Chao
2015-10-01
The characterization of triaxially braided composites is complicated by the nonuniformity of deformation within the unit cell and by the possibility of free-edge effects related to the large size of the unit cell. Extensive experimental investigation has been conducted to develop more accurate test approaches for characterizing the actual mechanical properties of the material under study. In this work, a meso-scale finite element model is utilized to simulate two complex specimens, a notched tensile specimen and a tube tensile specimen, which are designed to avoid free-edge effects and the premature edge damage they induce. The full-field strain data are predicted numerically and compared with experimental data obtained by Digital Image Correlation. The numerically predicted tensile strength values are compared with experimentally measured results. The discrepancies between numerically predicted and experimentally measured data, and the capabilities of the different test approaches, are analyzed and discussed. The presented numerical model can assist in the evaluation of different test methods, and is especially useful in identifying potential local damage events.
L. Pan; Y. Seol; G. Bodvarsson
2004-04-29
The dual-continuum random-walk particle tracking approach is an attractive method for simulating transport in a fractured porous medium. For such a model to be truly successful, however, the key issue is to properly simulate the mass transfer between the fracture and matrix continua. In a recent paper, Pan and Bodvarsson (2002) proposed an improved scheme for simulating fracture-matrix mass transfer, by introducing the concept of activity range into the calculation of fracture-matrix particle-transfer probability. By comparing with analytical solutions, they showed that their scheme successfully captured the transient diffusion depth into the matrix without any additional subgrid (matrix) cells. This technical note presents an expansion of their scheme to cases in which significant water flow through the fracture-matrix interface exists. The dual-continuum particle tracker with this new scheme was found to be as accurate as a numerical model using a more detailed grid. The improved scheme can be readily incorporated into the existing particle-tracking code, while still maintaining the advantage of needing no additional matrix cells to capture transient features of particle penetration into the matrix.
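The role of the fracture-matrix transfer probability can be illustrated with a deliberately simplified two-continuum random walk. The per-step probabilities p_fm and p_mf below are fixed hypothetical constants; the actual scheme derives them from the activity-range concept rather than holding them fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def dual_continuum_walk(n_particles, n_steps, p_fm, p_mf):
    """Particles hop between fracture (state 0) and matrix (state 1)
    continua with fixed per-step transfer probabilities; returns the
    final fraction of particles residing in the matrix."""
    state = np.zeros(n_particles, dtype=int)  # all start in the fracture
    for _ in range(n_steps):
        u = rng.random(n_particles)
        to_matrix = (state == 0) & (u < p_fm)
        to_fracture = (state == 1) & (u < p_mf)
        state[to_matrix] = 1
        state[to_fracture] = 0
    return state.mean()

# Long-time partitioning tends to p_fm / (p_fm + p_mf) = 2/3 here
frac_in_matrix = dual_continuum_walk(20000, 500, p_fm=0.02, p_mf=0.01)
```

The two-state Markov chain equilibrates to the ratio of transfer probabilities, which is why getting those probabilities right controls the simulated fracture-matrix partitioning.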
Commercial Implementation of Model-Based Manufacturing of Nanostructured Metals
Lowe, Terry C.
2012-07-24
Computational modeling is an essential tool for commercial production of nanostructured metals. At the high strength levels achievable in nanostructured metals, strength is limited by imperfections, so processing to achieve homogeneity at the micro- and nano-scales is critical. Large-scale manufacturing of bulk nanostructured metals by Severe Plastic Deformation is intrinsically a multi-scale problem and requires computer control, monitoring, and modeling. Multiple scales of modeling must be integrated to predict and control nanostructural, microstructural, and macrostructural product characteristics and production processes.
Langton, C.; Kosson, D.
2009-11-30
Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials, (2) methodologies for modeling the performance of these mechanisms and processes, and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers. The various
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
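The hierarchical-surplus adaptivity behind such collocation methods can be sketched in one dimension: a midpoint node is kept only where the surplus (the difference between the function and the current interpolant) exceeds a tolerance, so nodes cluster at steep gradients. This is a 1-D hat-function illustration of the principle, not the MdMrA method itself; the test function is hypothetical.

```python
import numpy as np

def hierarchical_interpolate(f, tol, max_level=12):
    """1-D adaptive hierarchical interpolation: refine an interval only
    when the hierarchical surplus at its midpoint exceeds tol."""
    nodes = {0.0: f(0.0), 1.0: f(1.0)}  # level-0 boundary nodes

    def interp(x):
        xs = sorted(nodes)
        return float(np.interp(x, xs, [nodes[p] for p in xs]))

    active = [(0.0, 1.0)]
    for _ in range(max_level):
        new_active = []
        for a, b in active:
            m = 0.5 * (a + b)
            surplus = f(m) - interp(m)  # hierarchical surplus at midpoint
            if abs(surplus) > tol:
                nodes[m] = f(m)
                new_active += [(a, m), (m, b)]
        if not new_active:
            break
        active = new_active
    return interp, len(nodes)

f = lambda x: np.tanh(20.0 * (x - 0.3))  # steep transition near x = 0.3
u, n_nodes = hierarchical_interpolate(f, tol=1e-3)
```

Because refinement is driven by local surpluses, the node count stays far below that of a uniform grid of comparable accuracy, which is the essence of adaptive sparse-grid collocation.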
Lindquist, W. Brent; Jones, Keith W.; Um, Wooyong; Rockhold, mark; Peters, Catherine A.; Celia, Michael A.
2013-02-15
secondary mineral precipitates (cancrinite), conducting experiments under conditions with and without Al allowed us to experimentally separate the conditions that lead to quartz dissolution from the conditions that lead to quartz dissolution plus cancrinite precipitation. Consistent with our expectations, in the experiments without Al, there was a substantial reduction in volume of the solid matrix. With Al there was a net increase in the volume of the solid matrix. The rate and extent of reaction was found to increase with temperature. These results demonstrate a successful effort to identify conditions that lead to increases and conditions that lead to decreases in solid matrix volume due to reactions of caustic tank wastes with quartz sands. In addition, we have begun to work with slightly larger, intermediate-scale columns packed with Hanford natural sediments and quartz. Similar dissolution and precipitation were observed in these columns. The measurements are being interpreted with reactive transport modeling using STOMP; preliminary observations are reported here. 2) Multi-Scale Imaging and Analysis. Mineral dissolution and precipitation rates within a porous medium will be different in different pores due to natural heterogeneity and the heterogeneity that is created from the reactions themselves. We used a combination of X-ray computed microtomography, backscattered electron and energy dispersive X-ray spectroscopy combined with computational image analysis to quantify pore structure, mineral distribution, structure changes and fluid-air and fluid-grain interfaces. Results and Key Findings: Three of the columns from the reactive flow experiments at PNNL (S1, S3, S4) were imaged using 3D X-ray computed microtomography (XCMT) at BNL and analyzed using 3DMA-rock at SUNY Stony Brook. The imaging results support the mass balance findings reported by Dr. Um's group regarding the substantial dissolution of quartz in column S1.
An important observation is that of grain
Kramer, Sharlotte Lorraine Bolyard; Scherzinger, William M.
2014-09-01
The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identication that relies on full-eld experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-eld exper- imental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the im- plementation and evaluation of VFM primarily for nite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Cor- relation (DIC) optical technique that provides full-eld displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitu- tive model parameter identication, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
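The core VFM idea is that the principle of virtual work turns full-field strain data directly into an equation for the constitutive parameters. For a uniaxial linear elastic bar under load F with the virtual field u*(x) = x (so the virtual strain is 1), the virtual work balance reduces to E·A·∫ε dx = F·L. The sketch below recovers a modulus from simulated noisy strain data; linear elasticity stands in for the report's finite-deformation plasticity, and all numbers are hypothetical.

```python
import numpy as np

L, A, F = 0.1, 1e-4, 5e3  # bar length [m], cross-section [m^2], load [N]
E_true = 70e9             # "unknown" modulus to be recovered [Pa]

# Simulated full-field strain measurement with DIC-like additive noise
x = np.linspace(0.0, L, 201)
eps_exact = F / (E_true * A) * np.ones_like(x)
rng = np.random.default_rng(1)
eps_meas = eps_exact + 1e-6 * rng.standard_normal(x.size)

# VFM identification with virtual field u*(x) = x (virtual strain = 1):
#   E * A * integral(eps dx) = F * L   =>   E = F / (A * mean(eps))
E_hat = F / (A * eps_meas.mean())
```

No iterative FEM solve is needed: the parameter appears linearly in the virtual work equation, which is exactly why VFM is fast compared to FEMU.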
A Multi-Methods Approach to HRA and Human Performance Modeling: A Field Assessment
Jacques Hugo; David I Gertman
2012-06-01
The Advanced Test Reactor (ATR) is a research reactor at the Idaho National Laboratory that is primarily designed and used to test materials for other, larger-scale and prototype reactors. The reactor offers various specialized systems and allows certain experiments to be run at their own temperature and pressure. The ATR Canal temporarily stores completed experiments and used fuel. It also has facilities to conduct underwater operations such as experiment examination or removal. In reviewing the ATR safety basis, a number of concerns were identified involving the ATR canal. A brief study identified ergonomic issues involving the manual handling of fuel elements in the canal that may increase the probability of human error and of unwanted acute physical outcomes to the operator. In response to this concern, a study was conducted that refined the previous HRA scoping analysis by determining the probability of the inadvertent exposure of a fuel element to the air during fuel movement and inspection. The HRA analysis employed the SPAR-H method and was supplemented by information gained from a detailed analysis of the fuel inspection and transfer tasks. This latter analysis included ergonomics, work cycles, task duration, and workload imposed by tool and workplace characteristics, personal protective clothing, and operational practices that have the potential to increase physical and mental workload. Part of this analysis consisted of NASA-TLX analyses, combined with operational sequence analysis, computational human performance analysis (CHPA), and 3D graphical modeling to determine task failures and precursors to such failures that have safety implications. Experience in applying multiple analysis techniques in support of HRA methods is discussed.
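SPAR-H quantifies a human error probability (HEP) by scaling a nominal HEP with performance-shaping-factor (PSF) multipliers; when the composite multiplier is large, an adjustment formula keeps the result bounded below 1. The sketch below assumes the standard SPAR-H adjustment formula and uses illustrative values, not those of the ATR canal analysis.

```python
def spar_h_hep(nhep, psfs):
    """SPAR-H adjusted human error probability: scale the nominal HEP
    (NHEP) by the composite PSF multiplier, bounded below 1 via
    HEP = NHEP*PSF / (NHEP*(PSF - 1) + 1)."""
    comp = 1.0
    for p in psfs:
        comp *= p
    return nhep * comp / (nhep * (comp - 1.0) + 1.0)

# Illustrative action task: nominal HEP 0.001 with two degraded PSFs
# (e.g. moderate stress x poor ergonomics) -- hypothetical multipliers
hep = spar_h_hep(0.001, [2.0, 5.0])
```

With all PSFs nominal (multiplier 1) the formula returns the nominal HEP unchanged, and even extreme multipliers cannot push the adjusted HEP above 1.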
Haihua Zhao; Per F. Peterson
2010-10-01
Thermal mixing and stratification phenomena play major roles in the safety of reactor systems with large enclosures, such as containment safety in the current fleet of LWRs, long-term passive containment cooling in Gen III+ plants including AP-1000 and ESBWR, the cold and hot pool mixing in pool type sodium cooled fast reactor systems (SFR), and reactor cavity cooling system behavior in high temperature gas cooled reactors (HTGR), etc. Depending on the fidelity requirement and computational resources, 0-D steady state models (heat transfer correlations), 0-D lumped parameter based transient models, 1-D physical-based coarse grain models, and 3-D CFD models are available. Current major system analysis codes either have no models or only 0-D models for thermal stratification and mixing, which can only give highly approximate results for simple cases. While 3-D CFD methods can be used to analyze simple configurations, these methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries. Due to prohibitive computational expenses for long transients in very large volumes, 3-D CFD simulations remain impractical for system analyses. For mixing in stably stratified large enclosures, UC Berkeley developed 1-D models based on Zuber's hierarchical two-tiered scaling analysis (HTTSA) method, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. This paper will present an overview of important thermal mixing and stratification phenomena in large enclosures for different reactors, major modeling methods and their advantages and limits, and potential paths to improve simulation capability and reduce analysis uncertainty in this area for advanced reactor system analysis tools.
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2011-12-22
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
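At the kinetic level, the simplest stand-in for the constrained bead-rod chain is a Hookean dumbbell, whose dimensionless connector vector obeys an Ornstein-Uhlenbeck SDE. The Euler-Maruyama sketch below is a hedged illustration of this class of microscopic model, not the paper's algorithm; it checks the stationary variance predicted by the corresponding Fokker-Planck equation.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_dumbbells(n, dt, n_steps):
    """Euler-Maruyama for the dimensionless connector vector Q of a
    Hookean dumbbell: dQ = -(1/2) Q dt + dW per component, whose
    Fokker-Planck stationary density has <Q_i^2> = 1."""
    Q = rng.standard_normal((n, 3))  # start at equilibrium
    for _ in range(n_steps):
        Q += -0.5 * Q * dt + np.sqrt(dt) * rng.standard_normal((n, 3))
    return Q

Q = simulate_dumbbells(10000, 1e-2, 1000)
var = Q.var(axis=0)  # each component should stay near 1
# A Kramers-form polymer stress contribution is built from <Q Q> moments.
```

The constrained bead-rod model replaces the Hookean spring force with rigid-rod constraints, which is what makes the implicit constraint formulation in the paper necessary.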
Samala, Ravi K. Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir; Wei, Jun; Helvie, Mark A.; Sahiner, Berkman
2014-02-15
Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volume enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART) that was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined multiscale Hessian response to enhance MCs by shape and bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values and the number of MCs in the cluster, cluster shape, and cluster based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. For view-based detection, a
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to a theoretical basis for modeling salt deformation as a viscous process. It is a complex method and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation in the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes and simulations have a shorter execution time.
Wang, Chao-Ying; Li, Chen-liang; Wu, Guo-Xun; Wang, Bao-Lai; Yang, Li-Jun; Zhao, Wei; Meng, Qing-Yuan
2014-01-28
The multi-scale simulation method is employed to investigate how defects affect the performances of Li-ion batteries (LIBs). The stable positions, binding energies and dynamics properties of Li impurity in Si with a 30° partial dislocation and stacking fault (SF) have been studied in comparison with the ideal crystal. It is found that the most stable position is the tetrahedral (T{sub d}) site and the diffusion barrier is 0.63 eV in bulk Si. In the 30° partial dislocation core and SF region, the most stable positions are at the centers of the octagons (Oct-A and Oct-B) and pentahedron (site S), respectively. In addition, Li dopant may tend to congregate in these defects. The motion of Li along the dislocation core is carried out by the transport among the Oct-A (Oct-B) sites with the barrier of 1.93 eV (1.12 eV). In the SF region, the diffusion barrier of Li is 0.91 eV. These two types of defects may retard the fast migration of Li dopant that is finally trapped by them. Thus, the presence of the 30° partial dislocation and SF may deactivate the Li impurity and lead to low rate capability of LIB.
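The practical impact of these barrier differences can be gauged with a simple Arrhenius hop-rate estimate, rate = ν0·exp(−Ea/kT). The attempt frequency ν0 ≈ 1e13 Hz below is a typical assumed value, not a result of the paper.

```python
import numpy as np

kB = 8.617e-5            # Boltzmann constant [eV/K]
nu0, T = 1.0e13, 300.0   # assumed attempt frequency [Hz], temperature [K]

def hop_rate(Ea):
    """Arrhenius estimate of the Li hop rate over a barrier Ea [eV]."""
    return nu0 * np.exp(-Ea / (kB * T))

r_bulk = hop_rate(0.63)  # bulk Si
r_sf = hop_rate(0.91)    # stacking-fault region
r_disl = hop_rate(1.93)  # along the 30-degree partial dislocation core
```

At room temperature the bulk hop rate exceeds the dislocation-core rate by a factor of exp(1.30/kT), roughly twenty orders of magnitude, which is the quantitative sense in which the defects trap Li and degrade rate capability.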
Method for quantifying the prediction uncertainties associated with water quality models
Summers, J.K.; Wilson, H.T.; Kou, J.
1993-01-01
Many environmental regulatory agencies depend on models to organize, understand, and utilize information for regulatory decision making. A general analytical protocol was developed to quantify the prediction error associated with commonly used surface water quality models. Its application is demonstrated by comparing water quality models configured to represent different levels of spatial, temporal, and mechanistic complexity. This comparison can be accomplished by fitting the models to a benchmark data set. Once the models are successfully fitted to the benchmark data, the prediction errors associated with each application can be quantified using Monte Carlo simulation techniques.
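The Monte Carlo step of such a protocol can be sketched with a minimal water-quality model: fit a first-order decay C(x) = C0·exp(−kx) to benchmark data, then propagate the fitted parameter's standard error into a prediction error band at an unmonitored location. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Benchmark data from a first-order decay model C(x) = C0 * exp(-k x)
x = np.linspace(0.0, 10.0, 11)   # distance downstream [km]
k_true, C0 = 0.25, 8.0           # decay rate [1/km], source strength [mg/L]
data = C0 * np.exp(-k_true * x) * (1.0 + 0.05 * rng.standard_normal(x.size))

# Fit k by linear regression on log-concentration
slope, intercept = np.polyfit(x, np.log(data), 1)
k_hat = -slope
resid = np.log(data) - (slope * x + intercept)
se_k = resid.std(ddof=2) / np.sqrt(((x - x.mean()) ** 2).sum())

# Monte Carlo: sample the fitted rate, predict at an unmonitored site x = 12
ks = rng.normal(k_hat, se_k, 10000)
C_pred = np.exp(intercept) * np.exp(-ks * 12.0)
band = np.percentile(C_pred, [5, 95])  # 90% prediction error band [mg/L]
```

The width of the band, rather than the point prediction alone, is what the protocol reports to the decision maker; more complex models would sample several correlated parameters at once.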
Multiscale analysis of nonlinear systems using computational homology
Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner
2010-05-24
This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization
Phifer, Mark A.; Smith, Frank G. III
2013-06-21
A 3-D STOMP model has been developed for the Portsmouth On-Site Waste Disposal Facility (OSWDF) at Site D as outlined in Appendix K of FBP 2013. This model projects the flow and transport of the following radionuclides to various points of assessment: Tc-99, U-234, U-235, U-236, U-238, Am-241, Np-237, Pu-238, Pu-239, Pu-240, Th-228, and Th-230. The model includes the radioactive decay of these parents, but not the associated daughter ingrowth, which the STOMP model does not have the capability to represent. The Savannah River National Laboratory (SRNL) provides herein a recommended method to account for daughter ingrowth in association with the Portsmouth OSWDF Performance Assessment (PA) modeling.
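For a two-member chain, daughter ingrowth follows the classical Bateman solution, the kind of correction a method layered on top of STOMP's parent-only decay would supply. The sketch below uses the Np-237 → U-233 pair (treating the short-lived Pa-233 intermediate as instantaneous); the half-lives are textbook values, and the specific pairing is chosen here only for illustration.

```python
import numpy as np

def bateman_pair(N1_0, lam1, lam2, t):
    """Parent/daughter atom counts at time t for a two-member decay
    chain with the daughter initially absent (Bateman solution)."""
    N1 = N1_0 * np.exp(-lam1 * t)
    N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return N1, N2

lam1 = np.log(2.0) / 2.14e6  # Np-237 decay constant [1/yr], T1/2 = 2.14e6 yr
lam2 = np.log(2.0) / 1.59e5  # U-233 decay constant [1/yr], T1/2 = 1.59e5 yr
N1, N2 = bateman_pair(1.0, lam1, lam2, t=1.0e5)  # ingrowth after 100,000 yr
```

A parent-only model would report N1 and silently drop N2, understating the inventory of the radiologically significant daughter at late times; chaining this solution member-by-member extends it to longer chains.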
Thornton, J.W.; McDowell, T.P.; Hughes, P.J.
1997-09-01
The results of five practical vertical ground heat exchanger sizing programs are compared against a detailed simulation model that has been calibrated to monitored data taken from one military family housing unit at Fort Polk, Louisiana. The calibration of the detailed model to data is described in a companion paper. The assertion that the data/detailed model is a useful benchmark for practical sizing methods is based on this calibration. The results from the comparisons demonstrate the current level of agreement between vertical ground heat exchanger sizing methods in common use. It is recommended that the calibration and comparison exercise be repeated with data sets from additional sites in order to build confidence in the practical sizing methods.
EFFECTS OF PORE STRUCTURE CHANGE AND MULTI-SCALE HETEROGENEITY...
Office of Scientific and Technical Information (OSTI)
As a whole, this research generated a better understanding of reactive transport in porous media, and resulted in more accurate methods for reaction rate upscaling and improved ...
Xu, Zhijie; Li, Dongsheng; Xu, Wei; Devaraj, Arun; Colby, Robert J.; Thevuthasan, Suntharampillai; Geiser, B. P.; Larson, David J.
2015-04-01
In atom probe tomography (APT), accurate reconstruction of the spatial positions of field evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material in addition to the assumption of a steady state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layered-structured APT tips. The simulation results of the shape evolution by the level set model qualitatively agree with the finite element method and the literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of a heterogeneous tip, and the interface curvature can potentially lead to artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides smoother interface representation with the aid of the intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations and the effect of evaporation laws on the tip shape evolution is also presented.
Juxiu Tong; Bill X. Hu; Hai Huang; Luanjin Guo; Jinzhong Yang
2014-03-01
With the growing importance of water resources worldwide, remediation of anthropogenic contamination involving reactive solute transport becomes even more important. A good understanding of reactive rate parameters such as kinetic parameters is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reaction processes and limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF efficiently improved the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
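The constrained-EnKF idea can be sketched with a first-order reaction dC/dt = −kC standing in for urea hydrolysis: augment the state with the unknown rate k, update the ensemble with each noisy concentration measurement, and clip k to remain non-negative, mirroring the physical constraint the paper imposes. All settings below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

k_true, C0, dt, R = 0.5, 10.0, 0.1, 0.05 ** 2  # truth, init, step, obs var
n_ens = 200
k = rng.uniform(0.0, 2.0, n_ens)  # prior ensemble for the unknown rate
C = np.full(n_ens, C0)            # every member starts at the known C0
C_truth = C0

for _ in range(50):
    C_truth *= np.exp(-k_true * dt)            # synthetic truth
    C = C * np.exp(-k * dt)                    # forecast each member
    y = C_truth + rng.normal(0.0, np.sqrt(R))  # noisy observation
    cov_CC = float(np.cov(C))                  # ensemble covariances give
    cov_kC = np.cov(k, C)[0, 1]                # the Kalman gains
    gain_C = cov_CC / (cov_CC + R)
    gain_k = cov_kC / (cov_CC + R)
    innov = y + rng.normal(0.0, np.sqrt(R), n_ens) - C  # perturbed obs
    C = C + gain_C * innov
    k = np.clip(k + gain_k * innov, 0.0, None)  # physical constraint k >= 0

k_hat = float(k.mean())  # converges toward k_true
```

The cross-covariance between the parameter and the observed concentration is what lets each measurement correct k; the clipping step is the minimal analogue of the paper's physically-meaningful constraints.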
Change of variables as a method to study general β-models: Bulk universality
Shcherbina, M.
2014-04-15
We consider β matrix models with real analytic potentials. Assuming that the corresponding equilibrium density ρ has a one-interval support (without loss of generality σ = [-2, 2]), we study the transformation of the correlation functions after the change of variables λ{sub i} → ζ(λ{sub i}) with ζ(λ) chosen from the equation ζ'(λ)ρ(ζ(λ)) = ρ{sub sc}(λ), where ρ{sub sc}(λ) is the standard semicircle density. This gives us the deformed β-model which has an additional interaction term. A standard transformation with the Gaussian integral allows us to show that the deformed β-model may be reduced to the standard Gaussian β-model with a small perturbation n{sup -1}h(λ). This reduces most of the problems of local and global regimes for β-models to the corresponding problems for the Gaussian β-model with a small perturbation. In the present paper, we prove the bulk universality of local eigenvalue statistics for both one-cut and multi-cut cases.
SEISMIC AND ROCK PHYSICS DIAGNOSTICS OF MULTISCALE RESERVOIR TEXTURES
Gary Mavko
2003-10-01
As part of our study on ''Relationships between seismic properties and rock microstructure'', we have (1) Studied relationships between velocity and permeability. (2) Used independent experimental methods to measure the elastic moduli of clay minerals as functions of pressure and saturation. (3) Applied different statistical methods for characterizing heterogeneity and textures from scanning acoustic microscope (SAM) images of shale microstructures. (4) Analyzed the directional dependence of velocity and attenuation in different reservoir rocks. (5) Compared Vp measured under hydrostatic and non-hydrostatic stress conditions in sands. (6) Studied stratification as a source of intrinsic anisotropy in sediments using Vp and statistical methods for characterizing textures in sands.
Numerical method to test a theoretical model of the quantum interferen...
Office of Scientific and Technical Information (OSTI)
A numerical method is provided to fit the experimental conductivity to the complicated conductivity expression for the quantum interference effect of Anderson localization. This ...
Application of Gaseous Sphere Injection Method for Modeling Under-expanded H2 Injection
Whitesides, R; Hessel, R P; Flowers, D L; Aceves, S M
2010-12-03
A methodology for modeling gaseous injection has been refined and applied to recent experimental data from the literature. This approach uses a discrete-phase analogy to handle gaseous injection, allowing gaseous injection to be added to a CFD grid without needing to resolve the injector nozzle. This paper focuses on model testing to provide the basis for simulation of hydrogen direct-injected internal combustion engines. The model has been updated to be more applicable to full engine simulations and shows good agreement with experiments for jet penetration and time-dependent axial mass fraction, while the available radial mass fraction data are predicted less well.
Gering, Kevin L
2013-08-27
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over the aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
Modeling Slag Penetration and Refractory Degradation Using the Finite Element Method
Johnson, Kenneth I.; Williford, Ralph E.; Matyas, Josef; Pilli, Siva Prasad; Sundaram, S. K.; Korolev, Vladimir N.
2008-09-01
Refractory degradation due to slag penetration can significantly reduce the service life of gasifier refractory linings. This paper describes a modeling approach that was developed to predict refractory spalling as a function of operating temperature, coal feedstock and refractory type. The model simulates the coupled thermal, diffusion, and mechanical interactions of coal slag with refractory ceramics. The heat transfer and slag diffusion solutions are directly coupled through a temperature-dependent effective diffusivity for slag penetration. The effective diffusivity is defined from slag penetration tests conducted in our laboratories on specific coal slag and refractory combinations. Chemically-induced swelling of the refractory and the build-up of mechanical stresses are functions of the slag penetration. The model results are compared with analytical spalling models and validated by experimental data in order to develop an efficient refractory degradation model for implementation in a systems level gasifier model. The ultimate goal of our research is to provide a tool that will help optimize gasifier performance by balancing conversion efficiency with refractory life.
Multiscale Toxicology - Building the Next Generation Tools for Toxicology
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.; Waters, Katrina M.
2012-09-01
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus) to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL's focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescence microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISSD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
Lesch, David A; Adriaan Sachtler, J.W. J.; Low, John J; Jensen, Craig M; Ozolins, Vidvuds; Siegel, Don
2011-02-14
UOP LLC, a Honeywell Company, Ford Motor Company, and Striatus, Inc., collaborated with Professor Craig Jensen of the University of Hawaii and Professor Vidvuds Ozolins of the University of California, Los Angeles on a multi-year cost-shared program to discover novel complex metal hydrides for hydrogen storage. This innovative program combined sophisticated molecular modeling with high throughput combinatorial experiments to maximize the probability of identifying commercially relevant, economical hydrogen storage materials with broad application. A set of tools was developed to pursue the medium throughput (MT) and high throughput (HT) combinatorial exploratory investigation of novel complex metal hydrides for hydrogen storage. The assay programs consisted of monitoring hydrogen evolution as a function of temperature. This project also incorporated theoretical methods to help select candidate materials families for testing. The Virtual High Throughput Screening served as a virtual laboratory, calculating structures and their properties. First Principles calculations were applied to various systems to examine hydrogen storage reaction pathways and the associated thermodynamics. The experimental program began with the validation of the MT assay tool with NaAlH4/0.02 mole Ti, the state-of-the-art hydrogen storage system given by decomposition of sodium alanate to sodium hydride, aluminum metal, and hydrogen. Once certified, a combinatorial 21-point study of the NaAlH4-LiAlH4-Mg(AlH4)2 phase diagram was investigated with the MT assay. Stability proved to be a problem as many of the materials decomposed during synthesis, altering the expected assay results. This resulted in repeating the entire experiment with a mild milling approach, which only temporarily increased capacity. NaAlH4 was the best performer in both studies and no new mixed alanates were observed, a result consistent with the VHTS. Powder XRD suggested that the reverse reaction, the regeneration
Multiscale Speciation of U and Pu at Chernobyl, Hanford, Los Alamos,
U.S. Department of Energy (DOE) all webpages (Extended Search)
McGuire AFB, Mayak, and Rocky Flats | Stanford Synchrotron Radiation Lightsource Multiscale Speciation of U and Pu at Chernobyl, Hanford, Los Alamos, McGuire AFB, Mayak, and Rocky Flats Friday, June 26, 2015 X-ray fluorescence maps of (clockwise from upper right) Ga, U, Ca, Pu, Ti, and K in a 350 micron PuO2-UO2 composite particle produced by the fire that consumed a nuclear armed BOMARC missile at McGuire AFB in 1960, measured with the two micron focused x-ray beam at SSRL. EST June 2,
SEISMIC AND ROCK PHYSICS DIAGNOSTICS OF MULTISCALE RESERVOIR TEXTURES
Gary Mavko
2003-06-30
As part of our study on ''Relationships between seismic properties and rock microstructure'', we have studied (1) Methods for detection of stress-induced velocity anisotropy in sands. (2) We have initiated efforts for velocity upscaling to quantify long-wavelength and short-wavelength velocity behavior and the scale-dependent dispersion caused by sediment variability in different depositional environments.
Rubert-Nason, Patricia; Mavrikakis, Manos; Maravelias, Christos T.; Grabow, Lars C.; Biegler, Lorenz T.
2014-04-01
Microkinetic models, combined with experimentally measured reaction rates and orders, play a key role in elucidating detailed reaction mechanisms in heterogeneous catalysis and have typically been solved as systems of ordinary differential equations. In this work, we demonstrate a new approach to fitting those models to experimental data. For the specific example treated here, by reformulating a typical microkinetic model for a continuous stirred tank reactor to a system of nonlinear equations, we achieved a 1000-fold increase in solution speed. The reduced computational cost allows a more systematic search of the parameter space, leading to better fits to the available experimental data. We applied this approach to the problem of methanol synthesis by CO/CO2 hydrogenation over a supported-Cu catalyst, an important catalytic reaction of large industrial interest and potential for large-scale CO2 chemical fixation.
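The reported speedup comes from replacing time integration to steady state with a direct nonlinear solve. As a toy illustration of that idea only, using a single-species Langmuir-Hinshelwood rate rather than the authors' methanol-synthesis mechanism, a CSTR steady-state balance can be solved by Newton iteration:

```python
def cstr_steady_state(c_in, tau, k, K_ads, tol=1e-12):
    """Steady-state exit concentration of a CSTR with a toy
    Langmuir-Hinshelwood rate r = k*K*c/(1 + K*c).
    Balance: (c_in - c)/tau - r(c) = 0, solved by Newton iteration
    instead of integrating the transient ODE to steady state."""
    c = c_in  # initial guess: feed concentration
    for _ in range(100):
        r = k * K_ads * c / (1.0 + K_ads * c)
        f = (c_in - c) / tau - r
        # Analytic derivative of the balance with respect to c.
        dr = k * K_ads / (1.0 + K_ads * c) ** 2
        df = -1.0 / tau - dr
        step = f / df
        c -= step
        if abs(step) < tol:
            break
    return c
```

For c_in = 1, tau = 1, k = 0.5, K = 2 the balance reduces to 1 = 2c², so the solver should return 1/√2.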
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
Clement, T Prabhakar; Barnett, Mark O; Zheng, Chunmiao; Jones, Norman L
2010-05-05
DE-FG02-06ER64213: Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales. Investigators: T. Prabhakar Clement (PD/PI) and Mark O. Barnett (Auburn), Chunmiao Zheng (Univ. of Alabama), and Norman L. Jones (BYU). The objective of this project was to develop scalable modeling approaches for predicting the reactive transport of metal contaminants. We studied two contaminants, a radioactive cation [U(VI)] and a metal(loid) oxyanion system [As(III/V)], and investigated their interactions with two types of subsurface materials, iron and manganese oxyhydroxides. We also developed modeling methods for describing the experimental results. Overall, the project supported 25 researchers at three universities and produced 15 journal articles, 3 book chapters, 6 Ph.D. dissertations, and 6 M.S. theses. Three key journal articles are: 1) Jeppu et al., A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands, Environ. Eng. Sci., 27(2): 147-158, 2010. 2) Loganathan et al., Scaling of adsorption reactions: U(VI) experiments and modeling, Applied Geochemistry, 24(11): 2051-2060, 2009. 3) Phillippi et al., Theoretical solid/solution ratio effects on adsorption and transport: uranium(VI) and carbonate, Soil Sci. Soc. of America J., 71: 329-335, 2007.
Sensitivity of the Properties of Ruthenium Blue Dimer to Method, Basis Set, and Continuum Model
Ozkanlar, Abdullah; Clark, Aurora E.
2012-05-23
The ruthenium blue dimer [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been subject to numerous computational studies primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of blue dimer using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within polarizable and conductor-like polarizable continuum model has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of density functional and continuum cavity model employed.
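Complete-basis-set extrapolations of the kind reported can be illustrated with the common two-point inverse-cubic formula for correlation energies. This is a generic textbook scheme chosen for illustration; the paper's exact KS-CBS extrapolation protocol may differ:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point inverse-cubic extrapolation assuming
    E(X) = E_CBS + A / X**3 for cardinal numbers X < Y
    (a common scheme; the paper's protocol may differ)."""
    x3, y3 = x ** 3, y ** 3
    return (e_x * x3 - e_y * y3) / (x3 - y3)
```

If the energies really follow E(X) = E_CBS + A/X³, the formula recovers E_CBS exactly from any two cardinal numbers.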
Gering, Kevin L.
2013-01-01
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
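The patent abstract does not give the modified expressions, but the general shape it describes, a Butler-Volmer term scaled by a sigmoid in pulse time, can be sketched as follows. All parameter names, default values, and the specific sigmoid form are placeholders, not the patented model:

```python
import math

def bv_current(i0, eta, T=298.15, alpha_a=0.5, alpha_c=0.5,
               pulse_t=1.0, t_half=0.5, slope=5.0):
    """Butler-Volmer current density scaled by a sigmoid
    'surface availability' factor in pulse time (illustrative form;
    the patented expressions are not public in this abstract).
    i0: exchange current density; eta: overpotential (V)."""
    F, R = 96485.332, 8.314462   # Faraday and gas constants
    f = F / (R * T)
    bv = i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
    # Sigmoid factor: kinetic availability grows with pulse duration.
    avail = 1.0 / (1.0 + math.exp(-slope * (pulse_t - t_half)))
    return bv * avail
```

The sign conventions follow the usual BV form: zero current at zero overpotential, anodic current positive.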
Buceta, D.; Tojo, C.; Vukmirovic, M.; Deepak, F. L.; Arturo Lopez-Quintela, M.
2015-07-14
We present a theoretical model to predict the atomic structure of Au/Pt nanoparticles synthesized in microemulsions. Excellent concordance with the experimental results shows that the structure of the nanoparticles can be controlled at sub-nanometer resolution simply by changing the reactants concentration. The results of this study not only offer a better understanding of the complex mechanisms governing reactions in microemulsions, but open up a simple new way to synthesize bimetallic nanoparticles with ad-hoc controlled nanostructures.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.
2015-09-08
A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the multi-relational directed graph based on the resources, identifying data feeds in accordance with the scenario and updating the multi-relational directed graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the multi-relational directed graph using the analytical routines, and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the multi-relational directed graph.
Elevated Temperature Primary Load Design Method Using Pseudo Elastic-Perfectly Plastic Model
Carter, Peter; Sham, Sam; Jetter, Robert I
2012-01-01
A new primary load design method for elevated temperature service has been developed. Codification of the procedure in an ASME Boiler and Pressure Vessel Code, Section III Code Case is being pursued. The proposed primary load design method is intended to provide the same margins on creep rupture, yielding and creep deformation for a component or structure that are implicit in the allowable stress data. It provides a methodology that does not require stress classification and is also applicable to a full range of temperature above and below the creep regime. Use of elastic-perfectly plastic analysis based on allowable stress with corrections for constraint, steady state stress and creep ductility is described. This approach is intended to ensure that traditional primary stresses are the basis for design, taking into account ductility limits to stress re-distribution and multiaxial rupture criteria.
Load Modeling and State Estimation Methods for Power Distribution Systems: Final Report
Tom McDermott
2010-05-07
The project objective was to provide robust state estimation for distribution systems, comparable to what has been available on transmission systems for decades. This project used an algorithm called Branch Current State Estimation (BCSE), which is more effective than classical methods because it decouples the three phases of a distribution system and uses branch current instead of node voltage as a state variable, which is a better match to current measurements.
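The BCSE idea of treating per-phase branch currents as states reduces, in the linear magnitude-only case, to a small weighted least-squares problem. A minimal sketch for a hypothetical two-branch radial feeder (not the project's algorithm or data): states are the two branch currents, and a load-current measurement couples them through KCL.

```python
def wls_branch_currents(z, w):
    """Weighted least squares for a toy 2-branch radial feeder.
    States x = [I_b1, I_b2]; measurements z = [I_b1, I_b2, I_load1]
    with linear model h(x) = [x1, x2, x1 - x2] from KCL at node 1
    (single phase, current magnitudes only)."""
    H = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]
    # Normal equations A x = b with A = H^T W H, b = H^T W z.
    A = [[sum(w[i] * H[i][r] * H[i][c] for i in range(3)) for c in range(2)]
         for r in range(2)]
    b = [sum(w[i] * H[i][r] * z[i] for i in range(3)) for r in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x1, x2
```

With consistent measurements the estimate reproduces them exactly; with noisy ones the weights w trade off measurement trust.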
Chen, E.P.; Costin, L.S.
1991-12-31
Pretest analysis of a heated block test, proposed for the Exploratory Studies Facility at Yucca Mountain, Nevada, was conducted in this investigation. Specifically, the study focuses on evaluating various designs for drilling holes and cutting slots for the block. The thermal/mechanical analysis was based on the finite element method and a compliant-joint rock-mass constitutive model. Based on the calculated results, the relative merits of the various test designs are discussed.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method Preprint Michael Kuss, Tony Markel, and William Kramer Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition Shenzhen, China November 5 - 9, 2010 Conference Paper NREL/CP-5400-48827 January 2011
Bickford, D F; Diemer, Jr, R B
1985-01-01
The redox state of glass from electric melters with complex feed compositions is determined by the balance between gases above the melt and transition metals and organic compounds in the feed. Part I discusses experimental and computational methods of relating flowrates and other melter operating conditions to the redox state of glass and the composition of the melter offgas. Computerized thermodynamic computational methods are useful in predicting the sequence and products of redox reactions and in assessing individual process variations. Melter redox state can be predicted by combining monitoring of melter operating conditions, redox measurement of fused melter feed samples, and periodic redox measurement of product. Mössbauer spectroscopy, and other methods which measure Fe(II)/Fe(III) in glass, can be used to measure melter redox state. Part II develops preliminary operating limits for the vitrification of High-Level Radioactive Waste. Limits on reducing potential to preclude the accumulation of combustible gases, accumulation of sulfides and selenides, and degradation of melter components are the most critical. Problems associated with excessively oxidizing conditions, such as glass foaming and potential ruthenium volatility, are controlled when sufficient formic acid is added to adjust melter feed rheology.
Seismic and Rockphysics Diagnostics of Multiscale Reservoir Textures
Gary Mavko
2005-07-01
This final technical report summarizes the results of the work done in this project. The main objective was to quantify rock microstructures and their effects in terms of elastic impedances in order to quantify the seismic signatures of microstructures. Acoustic microscopy and ultrasonic measurements were used to quantify microstructures and their effects on elastic impedances in sands and shales. The project led to the development of technologies for quantitatively interpreting rock microstructure images, understanding the effects of sorting, compaction, and stratification in sediments, and linking elastic data with geologic models to estimate reservoir properties. For the public, ultimately, better technologies for reservoir characterization translate into better reservoir development, reduced risks, and hence reduced energy costs.
Buceta, David; Tojo, Concha; Vukmirovic, Miomir B.; Deepak, F. Leonard; Lopez-Quintela, M. Arturo
2015-06-02
In this study, we present a theoretical model to predict the atomic structure of Au/Pt nanoparticles synthesized in microemulsions. Excellent concordance with the experimental results shows that the structure of the nanoparticles can be controlled at sub-nanometer resolution simply by changing the reactants concentration. The results of this study not only offer a better understanding of the complex mechanisms governing reactions in microemulsions, but open up a simple new way to synthesize bimetallic nanoparticles with ad-hoc controlled nanostructures.
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun -Young; Lim, Young -Kwon; Prabhat, -
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated so more
Jason Heath; Brian McPherson; Thomas Dewers
2011-03-15
The assessment of caprocks for geologic CO2 storage is a multi-scale endeavor. Investigation of a regional caprock - the Kirtland Formation, San Juan Basin, USA - at the pore-network scale indicates high capillary sealing capacity and low permeabilities. Core- and well-scale data, however, indicate a potential seal bypass system as evidenced by multiple mineralized fractures and methane gas saturations within the caprock. Our interpretation of 4He concentrations, measured at the top and bottom of the caprock, suggests low fluid fluxes through the caprock: (1) Of the total 4He produced in situ (i.e., at the locations of sampling) by uranium and thorium decay since deposition of the Kirtland Formation, a large portion still resides in the pore fluids. (2) Simple advection-only and advection-diffusion models, using the measured 4He concentrations, indicate low permeability (~10^-20 m^2 or lower) for the thickness of the Kirtland Formation. These findings, however, do not guarantee the lack of a large-scale bypass system. The measured data, located near the boundary conditions of the models (i.e., the overlying and underlying aquifers), limit our testing of conceptual models and the sensitivity of model parameterization. Thus, we suggest approaches for future studies to better assess the presence or lack of a seal bypass system at this particular site and for other sites in general.
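The advection-versus-diffusion argument behind such models can be made concrete with a Peclet-number estimate from Darcy's law. The parameter values below (head gradient, thickness, diffusivity, porosity) are illustrative round numbers for a thick low-permeability caprock, not the study's calibrated inputs:

```python
def peclet_for_caprock(k, grad_h=0.1, mu=1e-3, rho=1000.0, g=9.81,
                       L=500.0, D=1e-10, phi=0.1):
    """Peclet number for vertical transport across a caprock of
    thickness L (m), from Darcy's law. k: permeability (m^2);
    grad_h: head gradient (m/m); mu: viscosity (Pa s);
    D: effective diffusion coefficient (m^2/s); phi: porosity.
    All default values are illustrative assumptions."""
    q = k * rho * g * grad_h / mu   # Darcy flux (m/s)
    v = q / phi                     # average linear velocity (m/s)
    return v * L / D                # Pe < 1: diffusion-dominated
```

With k around 10^-20 m^2 the Peclet number drops below one, i.e. diffusion dominates 4He transport, consistent with a large fraction of in-situ-produced helium remaining in the pore fluids.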
Thermoacoustic wave propagation modeling using a dynamically adaptive wavelet collocation method
Vasilyev, O.V.; Paolucci, S.
1996-12-31
When a localized region of a solid wall surrounding a compressible medium is subjected to a sudden temperature change, the medium in the immediate neighborhood of that region expands. This expansion generates pressure waves. These thermally-generated waves are referred to as thermoacoustic (TAC) waves. The main interest in thermoacoustic waves is motivated by their ability to enhance heat transfer by inducing convective motion away from the heated area. Thermoacoustic wave propagation in a two-dimensional rectangular cavity is studied numerically. The thermoacoustic waves are generated by raising the temperature locally at the walls. The waves, which decay at large time due to thermal and viscous diffusion, propagate and reflect from the walls creating complicated two-dimensional patterns. The accuracy of numerical simulation is ensured by using a highly accurate, dynamically adaptive, multilevel wavelet collocation method, which allows local refinements to adapt to local changes in solution scales. Subsequently, high resolution computations are performed only in regions of large gradients. The computational cost of the method is independent of the dimensionality of the problem and is O(N), where N is the total number of collocation points.
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
Methods for modeling impact-induced reactivity changes in small reactors.
Tallman, Tyler N.; Radel, Tracy E.; Smith, Jeffrey A.; Villa, Daniel L.; Smith, Brandon M.; Radel, Ross F.; Lipinski, Ronald J.; Wilson, Paul Philip Hood
2010-10-01
This paper describes techniques for determining impact deformation and the subsequent reactivity change for a space reactor impacting the ground following a potential launch accident or for large fuel bundles in a shipping container following an accident. This technique could be used to determine the margin of subcriticality for such potential accidents. Specifically, the approach couples a finite element continuum mechanics model (Pronto3D or Presto) with a neutronics code (MCNP). DAGMC, developed at the University of Wisconsin-Madison, is used to enable MCNP geometric queries to be performed using Pronto3D output. This paper summarizes what has been done historically for reactor launch analysis, describes the impact criticality analysis methodology, and presents preliminary results using representative reactor designs.
Sabtaji, Agung; Nugraha, Andri Dian
2015-04-24
The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, and this situation leads to a high potential for seismic hazard in the region. Precise earthquake hypocenter locations are very important, as they can provide society with high-quality earthquake parameter information and constraints on the subsurface structure in this region. We determined a 1-D P-wave velocity model using the earthquake data catalog from BMKG for April 2009 up to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving hypocenter locations using the double-difference method. The relocated hypocenter locations show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to be more focused in location around the faults.
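The double-difference method improves relative locations by fitting differential travel times between nearby event pairs. A deliberately minimal 1-D, single-station, constant-velocity sketch of that constraint follows; real implementations (e.g. hypoDD) solve a large damped least-squares system over many pairs and stations:

```python
def dd_adjust_pair(x1, x2, t1_obs, t2_obs, v=6.0):
    """Toy 1-D double-difference step: for two events on the same side
    of a station, t1 - t2 = (x1 - x2)/v, so the observed differential
    arrival time fixes the pair separation. Shift both events
    symmetrically so their centroid (the absolute location, which the
    differential time cannot constrain) stays fixed. Units: km, s, km/s."""
    sep_target = v * (t1_obs - t2_obs)   # separation implied by the data
    centroid = 0.5 * (x1 + x2)
    return centroid + 0.5 * sep_target, centroid - 0.5 * sep_target
```

This illustrates why relocated hypocenters sharpen into clusters and planes: relative positions are constrained much more tightly than absolute ones.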
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
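The voxel-weighting idea described above can be illustrated with a deliberately simplified sketch. This is not the patented algorithm: the top-down viewing geometry, the signed ±1 weighting, and the column-count height extraction are all assumptions made for illustration.

```python
import numpy as np

def height_map_from_depth(depth, z_levels=16):
    """Simplified sketch: a downward-looking sensor at height `z_levels`
    records the depth to the surface along each vertical ray.  Voxels are
    weighted +1 (occupied, at/below the surface) or -1 (observed empty),
    and each column's height is the count of occupied voxels."""
    surface = z_levels - depth                      # surface height per ray
    z = np.arange(z_levels)                         # voxel centre heights
    occupied = z[None, None, :] < surface[..., None]
    weights = np.where(occupied, 1.0, -1.0)         # signed occupancy weight
    return (weights > 0).sum(axis=-1)               # height in voxel units

depth = np.array([[4.0, 6.0], [16.0, 0.0]])
hm = height_map_from_depth(depth)                   # heights 12, 10, 0, 16
```

In the real method the weights are accumulated from many views and tapered near the surface; here a single view and a hard threshold keep the idea visible.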
Modeling, mesh generation, and adaptive numerical methods for partial differential equations
Babuska, I.; Henshaw, W.D.; Oliger, J.E.; Flaherty, J.E.; Hopcroft, J.E.; Tezduyar, T.
1995-12-31
Mesh generation is one of the most time consuming aspects of computational solutions of problems involving partial differential equations. It is, furthermore, no longer acceptable to compute solutions without proper verification that specified accuracy criteria are being satisfied. Mesh generation must be related to the solution through computable estimates of discretization errors. Thus, an iterative process of alternate mesh and solution generation evolves in an adaptive manner with the end result that the solution is computed to prescribed specifications in an optimal, or at least efficient, manner. While mesh generation and adaptive strategies are becoming available, major computational challenges remain. One, in particular, involves moving boundaries and interfaces, such as free-surface flows and fluid-structure interactions. A 3-week program was held from July 5 to July 23, 1993 with 173 participants and 66 keynote, invited, and contributed presentations. This volume represents written versions of 21 of these lectures. These proceedings are organized roughly in order of their presentation at the workshop. Thus, the initial papers are concerned with geometry and mesh generation and discuss the representation of physical objects and surfaces on a computer and techniques to use this data to generate, principally, unstructured meshes of tetrahedral or hexahedral elements. The remainder of the papers cover adaptive strategies, error estimation, and applications. Several submissions deal with high-order p- and hp-refinement methods where mesh refinement/coarsening (h-refinement) is combined with local variation of method order (p-refinement). Combinations of mathematically verified and physically motivated approaches to error estimation are represented. Applications center on fluid mechanics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
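The alternate mesh/solution loop described above can be sketched in a few lines. The error indicator below (deviation of the function from its piecewise-linear interpolant) is a stand-in for a proper computable discretization-error estimate, and the bisection-only refinement is the simplest adaptive strategy.

```python
import numpy as np

def adapt_mesh(f, a, b, tol, max_iter=30):
    """Sketch of the alternate mesh/solution loop: flag each element whose
    local error indicator exceeds `tol` and bisect it, repeating until the
    indicator is everywhere below tolerance."""
    x = np.linspace(a, b, 5)
    for _ in range(max_iter):
        mid = 0.5 * (x[:-1] + x[1:])
        # indicator: deviation of f from its piecewise-linear interpolant
        err = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        bad = err > tol
        if not bad.any():
            break
        x = np.sort(np.concatenate([x, mid[bad]]))  # bisect flagged cells
    return x

# sqrt has unbounded curvature at 0, so the mesh grades toward that end
mesh = adapt_mesh(lambda t: np.sqrt(np.abs(t)), 0.0, 1.0, 1e-3)
```

The resulting mesh is strongly graded toward the singular endpoint, which is exactly the behavior adaptive strategies are meant to deliver automatically.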
Comparison of two up-scaling methods in poroelasticity and its generalizations
Berryman, J G
2004-03-16
Two methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media are discussed, compared, and contrasted. The two methods are: (1) two-scale and multiscale homogenization, and (2) volume averaging. Both these methods have advantages for some applications and disadvantages for others. For example, homogenization methods can give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis.
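A worked 1-D example of what homogenization can deliver: for steady flow across periodic layers, the up-scaled coefficient has a closed form (the width-weighted harmonic mean of the layer coefficients), which is exactly the kind of explicit coefficient formula the abstract credits homogenization with providing.

```python
import numpy as np

def harmonic_effective(k, widths):
    """1-D up-scaling formula: for flow across periodic layers, the
    homogenized coefficient is the width-weighted harmonic mean of the
    layer coefficients (the arithmetic mean bounds it from above)."""
    w = np.asarray(widths, float)
    w = w / w.sum()
    return 1.0 / np.sum(w / np.asarray(k, float))

# two equal layers with coefficients 1 and 4: k_eff = 1/(0.5/1 + 0.5/4) = 1.6
k_eff = harmonic_effective([1.0, 4.0], [0.5, 0.5])
```

A volume-averaging treatment would posit the same form of up-scaled equation but would typically need data or physical argument to pin the 1.6 down.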
Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Day-Lewis, Frederick David; Singha, Kamini; Johnson, Timothy C.; Haggerty, Roy; Binley, Andrew; Lane, John W.
2014-11-25
Mass transfer affects contaminant transport and is thought to control the efficiency of aquifer remediation at a number of sites within the Department of Energy (DOE) complex. An improved understanding of mass transfer is critical to meeting the enormous scientific and engineering challenges currently facing DOE. Informed design of site remedies and long-term stewardship of radionuclide-contaminated sites will require new cost-effective laboratory and field techniques to measure the parameters controlling mass transfer spatially and across a range of scales. In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Including the NMR component, our revised study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. To achieve our objectives, we implemented a 3
Advances in coupled safety modeling using systems analysis and high-fidelity methods.
Fanning, T. H.; Thomas, J. W.; Nuclear Engineering Division
2010-05-31
The potential for a sodium-cooled fast reactor to survive severe accident initiators with no damage has been demonstrated through whole-plant testing in EBR-II and FFTF. Analysis of the observed natural protective mechanisms suggests that they would be characteristic of a broad range of sodium-cooled fast reactors utilizing metal fuel. However, in order to demonstrate the degree to which new, advanced sodium-cooled fast reactor designs will possess these desired safety features, accurate, high-fidelity, whole-plant dynamics safety simulations will be required. One of the objectives of the advanced safety-modeling component of the Reactor IPSC is to develop a science-based advanced safety simulation capability by utilizing existing safety simulation tools coupled with emerging high-fidelity modeling capabilities in a multi-resolution approach. As part of this integration, an existing whole-plant systems analysis code has been coupled with a high-fidelity computational fluid dynamics code to assess the impact of high-fidelity simulations on safety-related performance. With the coupled capabilities, it is possible to identify critical safety-related phenomena in advanced reactor designs that cannot be resolved with existing tools. In this report, the impact of coupling is demonstrated by evaluating the conditions of outlet plenum thermal stratification during a protected loss of flow transient. Outlet plenum stratification was anticipated to alter core temperatures and flows predicted during natural circulation conditions. This effect was observed during the simulations. What was not anticipated, however, is the far-reaching impact that resolving thermal stratification has on the whole plant. The high temperatures predicted at the IHX inlet due to thermal stratification in the outlet plenum forces heat into the intermediate system to the point that it eventually becomes a source of heat for the primary system. The results also suggest that flow stagnation in the
Advanced modeling to accelerate the scale up of carbon capture technologies
Miller, David C.; Sun, XIN; Storlie, Curtis B.; Bhattacharyya, Debangsu
2015-06-01
In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.
Lab researchers develop models to analyze mixing in the ocean
U.S. Department of Energy (DOE) all webpages (Extended Search)
Researchers created models to quantify the horizontal and vertical structure of mixing in the ocean and its dependence upon eddy velocities. March 10, 2015. The Model for Prediction Across Scales-Ocean (MPAS-O) is a global, multiscale ocean code that simulates
Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis
2015-06-30
Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO_{2} storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO_{2}-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO_{2} injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between
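The POD half of POD-TPWL reduces to an SVD of training-run snapshot states. The sketch below uses synthetic snapshots built from exactly three modes; it is illustrative only, not the AD-GPRS implementation, and the 0.999 energy cutoff is an assumed choice.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD step of POD-TPWL (illustrative, not the AD-GPRS implementation):
    an SVD of training-run states yields a small basis Phi, so a full-order
    state x is approximated as Phi @ z with len(z) << len(x)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)          # captured "energy"
    keep = int(np.searchsorted(frac, energy)) + 1
    return U[:, :keep]

# synthetic training snapshots built from exactly three spatial modes
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
snaps = (np.outer(np.sin(np.pi * x), np.cos(t))
         + 0.5 * np.outer(np.sin(2.0 * np.pi * x), np.sin(t))
         + 0.1 * np.outer(np.sin(3.0 * np.pi * x), np.cos(3.0 * t)))
Phi = pod_basis(snaps)
z = Phi.T @ snaps[:, 0]              # 3 numbers instead of 200
recon = Phi @ z                      # near-exact reconstruction
```

The TPWL half then evolves `z` with linearizations saved from the training runs, which is where the large runtime speedups come from.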
Multi-scale thermalhydraulic analyses performed in Nuresim and Nurisp projects
Bestion, D.; Lucas, D.; Anglart, H.; Niceno, B.; Vyskocil, L.
2012-07-01
The successive NURESIM and NURISP projects of the 6th and 7th European Framework Programmes joined the efforts of 21 partners for developing and validating a reference multi-physics and multi-scale platform for reactor simulation. The platform includes system codes, component codes, and also CFD or CMFD simulation tools. Fine scale CFD simulations are useful for a better understanding of physical processes, for the prediction of small scale geometrical effects and for solving problems that require a fine space and/or time resolution. Many important safety issues usually treated at the system scale may now benefit from investigations at a CFD scale. The Pressurized Thermal Shock is investigated using several simulation scales including Direct Numerical Simulation, Large Eddy Simulation, Very Large Eddy Simulation and RANS approaches. Finally, a coupling of the system code and CFD is applied. Condensation Induced Water-Hammer was also investigated at both CFD and 1-D scale. Boiling flow in a reactor core up to Departure from Nucleate Boiling or Dry-Out is investigated at scales much smaller than the classical subchannel analysis codes. DNS was used to investigate very local processes whereas CFD in both RANS and LES was used to simulate bubbly flow and Euler-Lagrange simulations were used for annular mist flow investigations. Loss of Coolant Accidents are usually treated by system codes. Some related issues are now revisited at the CFD scale. In each case the progress of the analysis is summarized and the benefit of the multi-scale approach is shown. (authors)
McNunn, Gabriel S; Bryden, Kenneth M
2013-01-01
Tarjan's algorithm schedules the solution of systems of equations by noting the coupling and grouping between the equations. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Enabling the rapid assembly and substitution of different models permits the rapid turnaround needed to support the what-if kinds of questions that arise in engineering design.
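A minimal sketch of the idea: Tarjan's strongly-connected-components algorithm applied to a model dependency graph, where each SCC is a block of mutually coupled models that must be iterated together, and the order in which SCCs are emitted gives the solution schedule. The three-model blade example is hypothetical.

```python
def tarjan_scc(graph):
    """Tarjan's strongly-connected-components algorithm on a dependency
    graph mapping each model to the models it needs input from.  Each SCC
    is a block of mutually coupled models that must be iterated together;
    SCCs are emitted dependencies-first, i.e. in solution order."""
    index, low, stack, on_stack, sccs = {}, {}, [], set(), []
    counter = [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:              # v roots an SCC: pop it off
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# hypothetical blade example: heat transfer and coating temperature are
# mutually coupled; thermal stress only consumes their outputs
models = {"heat": ["coating"], "coating": ["heat"], "stress": ["heat", "coating"]}
schedule = tarjan_scc(models)
```

Here the coupled heat/coating pair forms one SCC that must be solved iteratively as a block, after which the stress model can be solved once, which is the schedule the algorithm returns.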
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations (‘closure models’). The drift flux model is based on Ishii and his collaborators’ work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
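The JFNK idea the abstract relies on fits in a few lines: Newton's method in which the linear solver only ever needs Jacobian-vector products, each approximated by a finite difference of the residual. The toy two-equation residual below is purely illustrative (the real discretized four-equation drift flux residual is far larger), and the bare-bones Krylov least-squares solve stands in for the GMRES solver PETSc provides.

```python
import numpy as np

def krylov_solve(Av, b, m):
    """Least-squares solve of A x = b using only matrix-vector products
    `Av` -- a bare-bones stand-in for a real GMRES implementation."""
    K = [b]
    for _ in range(m - 1):
        K.append(Av(K[-1]))
    K = np.column_stack(K)                       # Krylov basis
    AK = np.column_stack([Av(K[:, j]) for j in range(K.shape[1])])
    y, *_ = np.linalg.lstsq(AK, b, rcond=None)
    return K @ y

def jfnk(F, x0, tol=1e-10, max_newton=50, eps=1e-7):
    """Jacobian-free Newton-Krylov: the Jacobian never appears explicitly;
    J @ v is approximated by a finite difference of the residual F."""
    x = np.asarray(x0, float)
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jv = lambda v: (F(x + eps * v) - Fx) / eps
        x = x + krylov_solve(Jv, -Fx, m=x.size)
    return x

# toy 2-equation residual standing in for the (far larger) discretized
# four-equation drift flux residual
def residual(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,    # a balance-like constraint
                     x[0] - x[1]])               # a closure-like relation

sol = jfnk(residual, [1.0, 2.0])
```

Because only `F` evaluations are needed, discontinuous closures such as a discrete flow regime map can be dropped in without deriving an analytical Jacobian, which is the practical point the abstract makes.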
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
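For reference, IEC 60076-7 gives the relative aging rate of non-thermally-upgraded paper as doubling for every 6 K of hot-spot temperature above 98 °C. A toy Monte-Carlo sweep in that spirit is sketched below; the thermal model mapping vehicle count to hot-spot temperature is a made-up stand-in, not the standard's exponential thermal model.

```python
import random

def relative_aging(hotspot_c):
    """IEC 60076-7 relative aging rate for non-thermally-upgraded paper:
    it doubles for every 6 K above the 98 degC reference hot-spot value."""
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def expected_aging(n_scenarios, hotspot_model, seed=0):
    """Monte-Carlo sketch in the spirit of the abstract: sample charging
    scenarios, map each to a hot-spot temperature via `hotspot_model`
    (a made-up stand-in for the IEC thermal model), and average."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_scenarios):
        n_evs = rng.randint(0, 4)          # vehicles charging simultaneously
        total += relative_aging(hotspot_model(n_evs))
    return total / n_scenarios

# toy thermal model: each charging vehicle raises the hot spot by 5 K
aging = expected_aging(100, lambda n: 92.0 + 5.0 * n)
```

The exponential dependence is why averaging over scenarios matters: a few hot scenarios dominate the expected loss of life, which a single average-load calculation would miss.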
Modeling Deep Burn TRISO Particle Nuclear Fuel
Besmann, Theodore M [ORNL; Stoller, Roger E [ORNL; Samolyuk, German D [ORNL; Schuck, Paul C [ORNL; Rudin, Sven [Los Alamos National Laboratory (LANL); Wills, John [Los Alamos National Laboratory (LANL); Wirth, Brian D. [University of California, Berkeley; Kim, Sungtae [University of Wisconsin, Madison; Morgan, Dane [University of Wisconsin, Madison; Szlufarska, Izabela [University of Wisconsin, Madison
2012-01-01
Under the DOE Deep Burn program TRISO fuel is being investigated as a fuel form for consuming plutonium and minor actinides, and for greater efficiency in uranium utilization. The result will thus be to drive TRISO particulate fuel to very high burn-ups. In the current effort the various phenomena in the TRISO particle are being modeled using a variety of techniques. The chemical behavior is being treated utilizing thermochemical analysis to identify phase formation/transformation and chemical activities in the particle, including kernel migration. First principles calculations are being used to investigate the critical issue of fission product palladium attack on the SiC coating layer. Density functional theory is being used to understand fission product diffusion within the plutonium oxide kernel. Kinetic Monte Carlo techniques are shedding light on transport of fission products, most notably silver, through the carbon and SiC coating layers. The diffusion of fission products through an alternative coating layer, ZrC, is being assessed via DFT methods. Finally, a multiscale approach is being used to understand thermal transport, including the effect of radiation damage induced defects, in a model SiC material.
Bogenschutz, Peter; Moeng, Chin-Hoh
2015-10-13
The PI’s at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified-Higher Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) to the Multi-scale Modeling Framework (MMF) global model and testing of SHOC on deep convective cloud regimes.
Lundquist, K A; Chow, F K; Lundquist, J K; Mirocha, J D
2007-09-04
simulations, on the other hand, are performed by numerical weather prediction (NWP) codes, which cannot handle the geometry of the urban landscape, but do provide a more complete representation of atmospheric physics. NWP codes typically use structured grids with terrain-following vertical coordinates, include a full suite of atmospheric physics parameterizations, and allow for dynamic synoptic scale lateral forcing through grid nesting. Terrain following grids are unsuitable for urban terrain, as steep terrain gradients cause extreme distortion of the computational cells. In this work, we introduce and develop an immersed boundary method (IBM) to allow the favorable properties of a numerical weather prediction code to be combined with the ability to handle complex terrain. IBM uses a non-conforming structured grid, and allows solid boundaries to pass through the computational cells. As the terrain passes through the mesh in an arbitrary manner, the main goal of the IBM is to apply the boundary condition on the interior of the domain as accurately as possible. With the implementation of the IBM, numerical weather prediction codes can be used to explicitly resolve urban terrain. Heterogeneous urban domains using the IBM can be nested into larger mesoscale domains using a terrain-following coordinate. The larger mesoscale domain provides lateral boundary conditions to the urban domain with the correct forcing, allowing seamless integration between mesoscale and urban scale models. Further discussion of the scope of this project is given by Lundquist et al. [2007]. The current paper describes the implementation of an IBM into the Weather Research and Forecasting (WRF) model, which is an open source numerical weather prediction code. The WRF model solves the non-hydrostatic compressible Navier-Stokes equations, and employs an isobaric terrain-following vertical coordinate. 
Many types of IB methods have been developed by researchers; a comprehensive review can be found in Mittal
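A 1-D ghost-cell sketch of the immersed boundary idea described above: the solid surface cuts between two grid nodes, and the last node inside the solid is set so the interpolated field honors the wall condition. The linear extrapolation here is the simplest possible choice, not necessarily the reconstruction implemented in WRF.

```python
import numpy as np

def apply_ibm_bc(u, x, x_wall, u_wall=0.0):
    """Ghost-cell sketch of an immersed boundary in 1-D: the solid surface
    at `x_wall` cuts between two grid nodes.  The last node inside the
    solid becomes a ghost node, set by extrapolating through the wall
    value so the interpolated field satisfies u(x_wall) = u_wall."""
    u = u.copy()
    ghost = np.searchsorted(x, x_wall) - 1       # last node below the wall
    fluid = ghost + 1                            # first node in the fluid
    u[ghost] = u_wall + (u[fluid] - u_wall) * (x[ghost] - x_wall) / (x[fluid] - x_wall)
    return u

x = np.linspace(0.0, 1.0, 11)
u = np.ones_like(x)                              # interior field value 1
u_bc = apply_ibm_bc(u, x, x_wall=0.25)           # ghost node gets -1
```

Because the grid never conforms to the wall, steep terrain costs nothing in mesh quality; the price is paid instead in the accuracy of this near-wall reconstruction.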
Theory and Modeling Capabilities | Argonne National Laboratory
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theory and multiscale computer simulations provide the interpretive and predictive framework to understand fundamental processes and to aid in the design of functional nanoscale systems. Our primary facility is a high-performance computing cluster accommodating parallel compute-intensive applications. Capabilities include the Carbon high-performance computing cluster (3,000 cores, 30 GPUs, ~30 TeraFLOPS) and development tools (GNU and Intel compilers and math libraries).
Augmenting epidemiological models with point-of-care diagnostics data
Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; Ozmen, Ozgur
2016-04-20
Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
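A minimal sketch of the calibration loop: a plain forward-Euler SIR model and a generic simulated-annealing schedule, standing in for the paper's modified SIR equations and its exact algorithm. The target peak plays the role of the POC-derived peak load.

```python
import math
import random

def sir_peak(beta, gamma, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Forward-Euler SIR; returns the peak infected fraction.  A plain SIR
    stands in for the paper's modified model."""
    s, i, peak = s0, i0, i0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        peak = max(peak, i)
    return peak

def calibrate_beta(target_peak, gamma=0.1, seed=1, steps=400):
    """Simulated-annealing calibration of the transmission rate so the
    simulated peak matches an observed (e.g. POC-derived) peak load."""
    rng = random.Random(seed)
    beta = 0.5
    cost = abs(sir_peak(beta, gamma) - target_peak)
    best_beta, best_cost = beta, cost
    for k in range(steps):
        temp = 0.05 * (1.0 - k / steps) + 1e-4       # cooling schedule
        cand = min(max(beta + rng.gauss(0.0, 0.05), 0.01), 2.0)
        c = abs(sir_peak(cand, gamma) - target_peak)
        if c < cost or rng.random() < math.exp(-(c - cost) / temp):
            beta, cost = cand, c                     # accept the move
            if c < best_cost:
                best_beta, best_cost = cand, c
    return best_beta, best_cost

observed_peak = sir_peak(0.3, 0.1)       # synthetic "observed" peak load
beta_hat, fit_error = calibrate_beta(observed_peak)
```

Clamping the candidate transmission rate to [0.01, 2.0] mirrors the paper's point about keeping calibrated parameters within empirically reasonable ranges.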
Balcomb, J.D.
1981-01-01
Correlation methods have been developed to provide a quick and relatively simple technique for estimating the performance of passive solar systems. The correlations are done with respect to data generated from simulation models. The techniques and accuracies are described. Both the Solar Load Ratio and Un-Utilizability methods are described. The advantages and limitations of correlation methods as design tools are discussed.
V. Chipman
2002-10-31
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their postclosure analyses.
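The bookkeeping described above is simple: if η is the fraction of decay heat carried away by the ventilation air, the wall heat fraction is 1 − η. The wattage figures below are made-up illustrative numbers.

```python
def wall_heat_fraction(decay_heat_w, removed_by_air_w):
    """The report's bookkeeping in one line: heat-removal efficiency is
    the fraction of radionuclide decay heat carried off by the ventilation
    air, and one minus that fraction is conducted into the rock."""
    return 1.0 - removed_by_air_w / decay_heat_w

# e.g. 880 W of a 1 kW decay-heat load removed by the air stream
wf = wall_heat_fraction(1000.0, 880.0)    # wall heat fraction 0.12
```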
Physics-based multiscale coupling for full core nuclear reactor simulation
Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; Slaughter, Andrew E.; Andrš, David; Wang, Yaqi; Short, Michael P.; Perez, Danielle M.; Tonks, Michael R.; Ortensi, Javier; Zou, Ling; Martineau, Richard C.
2015-10-01
Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows for a variety of different data exchanges to occur simultaneously on high performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling—in a coupled, multiscale manner—crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle. 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
Gehin, J.C.; Worley, B.A.; Renier, J.P.; Wemple, C.A.; Jahshan, S.N.; Ryskamp, J.M.
1995-08-01
This report summarizes the neutronics analysis performed during 1991 and 1992 in support of characterization of the conceptual design of the Advanced Neutron Source (ANS). The methods used in the analysis, parametric studies, and key results supporting the design and safety evaluations of the conceptual design are presented. The analysis approach used during the conceptual design phase followed the same approach used in early ANS evaluations: (1) a strong reliance on Monte Carlo theory for beginning-of-cycle reactor performance calculations and (2) a reliance on few-group diffusion theory for reactor fuel cycle analysis and for evaluation of reactor performance at specific time steps over the fuel cycle. The Monte Carlo analysis was carried out using the MCNP continuous-energy code, and the few-group diffusion theory calculations were performed using the VENTURE and PDQ code systems. The MCNP code was used primarily for its capability to model the reflector components in realistic geometries as well as the inherent circumvention of cross-section processing requirements and use of energy-collapsed cross sections. The MCNP code was used for evaluations of reflector component reactivity effects and of heat loads in these components. The code was also used as a benchmark comparison against the diffusion-theory estimates of key reactor parameters such as region fluxes, control rod worths, reactivity coefficients, and material worths. The VENTURE and PDQ codes were used to provide independent evaluations of burnup effects, power distributions, and small perturbation worths. The performance and safety calculations performed over the subject time period are summarized, and key results are provided. The key results include flux and power distributions over the fuel cycle, silicon production rates, fuel burnup rates, component reactivities, control rod worths, component heat loads, shutdown reactivity margins, reactivity coefficients, and isotope production rates.
Adaptive multi-grid method for a periodic heterogeneous medium in 1-D
Fish, J.; Belsky, V.
1995-12-31
A multi-grid method for a periodic heterogeneous medium in 1-D is presented. Based on homogenization theory, special intergrid connection operators have been developed to imitate the low-frequency response of the differential equations with oscillatory coefficients. The proposed multi-grid method is proved to have a fast rate of convergence governed by the ratio q/(4-q). Building on this result, an adaptive multiscale computational scheme is developed. By this technique a computational model entirely constructed on the scale of material heterogeneity is used only where it is necessary to do so, as indicated by so-called Microscale Reduction Error (MRE) indicators, while in the remaining portion of the problem domain the medium is treated as homogeneous with effective properties. Such a posteriori MRE indicators and estimators are developed on the basis of assessing the validity of the two-scale asymptotic expansion.
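For the 1-D setting above, the homogenization result that underlies the intergrid operators can be stated in a few lines: the effective coefficient of a periodic medium is the harmonic mean of the cell coefficient. A minimal sketch (the function name and the two-phase example are ours, not the paper's):

```python
import numpy as np

def effective_coefficient(a_cell, n_quad=10_000):
    """Harmonic-mean effective coefficient for -d/dx(a(x/eps) du/dx) = f in 1-D.

    a_cell: periodic coefficient a(y) sampled on the unit cell [0, 1).
    Classical 1-D homogenization gives a_eff = (<1/a>)^-1.
    """
    y = np.linspace(0.0, 1.0, n_quad, endpoint=False)
    return 1.0 / np.mean(1.0 / a_cell(y))

# Two-phase laminate: a = 1 on half the cell, a = 10 on the other half.
# Harmonic mean with equal volume fractions: 2/(1 + 1/10) = 20/11 ~ 1.818
a_eff = effective_coefficient(lambda y: np.where(y < 0.5, 1.0, 10.0))
```

Note that the arithmetic mean (5.5 here) would badly overestimate the stiffness; this is why naive coarse-grid operators fail for oscillatory coefficients.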
Dynamics of a spherical particle in an acoustic field: A multiscale approach
Xie, Jin-Han, E-mail: J.H.Xie@ed.ac.uk; Vanneste, Jacques [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh EH9 3JZ (United Kingdom)
2014-10-15
A rigid spherical particle in an acoustic wave field oscillates at the wave period but also has a mean motion on a longer time scale. The dynamics of this mean motion is crucial for numerous applications of acoustic microfluidics, including particle manipulation and flow visualisation. It is controlled by four physical effects: acoustic (radiation) pressure, streaming, inertia, and viscous drag. In this paper, we carry out a systematic multiscale analysis of the problem in order to assess the relative importance of these effects depending on the parameters of the system that include wave amplitude, wavelength, sound speed, sphere radius, and viscosity. We identify two distinguished regimes characterised by a balance among three of the four effects, and we derive the equations that govern the mean particle motion in each regime. This recovers and organises classical results by King [On the acoustic radiation pressure on spheres, Proc. R. Soc. A 147, 212-240 (1934)], Gor'kov [On the forces acting on a small particle in an acoustical field in an ideal fluid, Sov. Phys. 6, 773-775 (1962)], and Doinikov [Acoustic radiation pressure on a rigid sphere in a viscous fluid, Proc. R. Soc. London A 447, 447-466 (1994)], clarifies the range of validity of these results, and reveals a new nonlinear dynamical regime. In this regime, the mean motion of the particle remains intimately coupled to that of the surrounding fluid, and while viscosity affects the fluid motion, it plays no part in the acoustic pressure. Simplified equations, valid when only two physical effects control the particle motion, are also derived. They are used to obtain sufficient conditions for the particle to behave as a passive tracer of the Lagrangian-mean fluid motion.
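One of the simplified two-effect balances described above (radiation force against viscous drag) reduces to an overdamped ODE for the mean position. A sketch under stated assumptions: a Gor'kov-type standing-wave force proportional to -sin(2kx) with a positive contrast factor, Stokes drag, and hypothetical parameter values of our choosing:

```python
import numpy as np

# Overdamped mean motion: drag balances the radiation force, so
#   dx/dt = -F0*sin(2*k*x) / (6*pi*mu*a)
# F0, k, mu, a below are hypothetical SI values for illustration only.
F0, k, mu, a = 1e-12, 2*np.pi/1e-3, 1e-3, 1e-6
mobility = 1.0 / (6*np.pi*mu*a)          # Stokes mobility of the sphere

x, dt = 2e-4, 1e-4                       # initial offset from the node; time step
for _ in range(200_000):
    x += dt * mobility * (-F0 * np.sin(2*k*x))
# The particle relaxes to the stable pressure node at x = 0.
```

With a negative contrast factor the sign of the force flips and the stable points become the antinodes, which is the standard acoustophoresis sorting criterion.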
Shi, Xing; Lin, Guang
2014-11-01
To model the sedimentation of a red blood cell (RBC) in a square duct and a circular pipe, the recently developed technique derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method (LBM-DLM/FD) is extended to employ the mesoscopic network model for simulations of RBC sedimentation in flow. The flow is simulated by the lattice Boltzmann method with a strong magnetic body force, while the network model is used for modeling RBC deformation. The fluid-RBC interactions are enforced by the Lagrange multiplier. The sedimentation of the RBC in a square duct and a circular pipe is simulated, demonstrating the capacity of the current method for modeling RBC sedimentation in various flows. Numerical results illustrate that the terminal settling velocity increases with the exerted body force. The deformation of the RBC has a significant effect on the terminal settling velocity due to the change of the frontal area: the larger the exerted force, the smaller the frontal area and the larger the deformation of the RBC.
Comparison of up-scaling methods in poroelasticity and its generalizations
Berryman, J G
2003-12-13
Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.
Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David
2013-09-01
The overarching goal of this work is to advance the capabilities of technology evaluators in evaluating the building-level baseline modeling capabilities of Energy Management and Information System (EMIS) software. Through their customer engagement platforms and products, EMIS software products have the potential to produce whole-building energy savings through multiple strategies: building system operation improvements, equipment efficiency upgrades and replacements, and inducement of behavioral change among the occupants and operations personnel. Some offerings may also automate the quantification of whole-building energy savings, relative to a baseline period, using empirical models that relate energy consumption to key influencing parameters, such as ambient weather conditions and building operation schedule. These automated baseline models can be used to streamline the whole-building measurement and verification (M&V) process, and therefore are of critical importance in the context of multi-measure whole-building focused utility efficiency programs. This report documents the findings of a study that was conducted to begin answering critical questions regarding quantification of savings at the whole-building level, and the use of automated and commercial software tools. To evaluate the modeling capabilities of EMIS software particular to the use case of whole-building savings estimation, four research questions were addressed: 1. What is a general methodology that can be used to evaluate baseline model performance, both in terms of a) overall robustness, and b) relative to other models? 2. How can that general methodology be applied to evaluate proprietary models that are embedded in commercial EMIS tools? How might one handle practical issues associated with data security, intellectual property, appropriate testing ‘blinds’, and large data sets? 3. How can buildings be pre-screened to identify those that are the most model-predictable, and therefore those
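The model-robustness question (1) above is typically answered with normalized goodness-of-fit statistics computed from out-of-sample predictions. A minimal sketch; the metric names follow common M&V convention (ASHRAE Guideline 14 style), but the function name and the acceptance thresholds used by any given EMIS tool are assumptions:

```python
import numpy as np

def baseline_fit_metrics(measured, predicted, n_params=1):
    """NMBE and CV(RMSE) in percent, the two statistics most often used to
    screen whole-building baseline models. n_params = number of fitted model
    parameters (degrees-of-freedom correction)."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    n = measured.size
    resid = measured - predicted
    nmbe = resid.sum() / ((n - n_params) * measured.mean()) * 100.0
    cv_rmse = np.sqrt((resid**2).sum() / (n - n_params)) / measured.mean() * 100.0
    return nmbe, cv_rmse

# Four days of measured vs. model-predicted daily energy use (synthetic numbers).
nmbe, cv = baseline_fit_metrics([100, 110, 90, 105], [98, 112, 91, 103])
```

Computing these on a holdout period rather than the training period is what makes the comparison across proprietary models fair.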
M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my
2014-06-19
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. In general, the plots of drying rate require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were evaluated using the coefficient of determination (R{sup 2}) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proved to be an effective estimator for moisture-time curves as well as for missing moisture content data of the seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.
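The thin-layer fitting step above can be sketched with the simplest of the published exponential models, the Newton/Lewis model MR = exp(-k t), fitted by log-linear least squares and scored with the same R{sup 2} and RMSE statistics. The data below are synthetic, not the Semporna measurements:

```python
import numpy as np

t = np.arange(0.0, 10.0)          # drying time, arbitrary units
mr = np.exp(-0.35 * t)            # synthetic moisture ratio, true k = 0.35

# Newton/Lewis model MR = exp(-k*t): linear in log space, so ordinary
# least squares on log(MR) recovers k directly.
k_hat = -np.polyfit(t, np.log(mr), 1)[0]
pred = np.exp(-k_hat * t)

rmse = np.sqrt(np.mean((mr - pred) ** 2))
r2 = 1.0 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
```

The multi-parameter models (e.g. Two Term) require nonlinear least squares instead of a log-linear fit, but the R{sup 2}/RMSE scoring is identical.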
Bernard, S.; Horsfield, B; Schultz, H; Schreiber, A; Wirth, R; Thi AnhVu, T; Perssen, F; Konitzer, S; Volk, H; et al.
2010-01-01
Organic geochemical analyses, including solvent extraction or pyrolysis, followed by gas chromatography and mass spectrometry, are generally conducted on bulk gas shale samples to evaluate their source and reservoir properties. While organic petrology has been directed at unravelling the matrix composition and textures of these economically important unconventional resources, their spatial variability in chemistry and structure is still poorly documented at the sub-micrometre scale. Here, a combination of techniques including transmission electron microscopy and a synchrotron-based microscopy tool, scanning transmission X-ray microscopy, have been used to characterize, at multiple length scales, an overmature organic-rich calcareous mudstone from northern Germany. We document multi-scale chemical and mineralogical heterogeneities within the sample, from the millimetre down to the nanometre scale. From the detection of different types of bitumen and authigenic minerals associated with the organic matter, we show that the multi-scale approach used in this study may provide new insights into gaseous hydrocarbon generation/retention processes occurring within gas shales and may shed new light on their thermal history.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
Chacon, Luis
2015-07-16
The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to Multi-D EM PIC: Vlasov-Darwin model (review and motivation for Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ω_{pe}Δt >> 1, and Δx >> λ_{D}. It requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit in long-time-scale applications. Moment-based acceleration is effective in minimizing N_{FE}, leading to an optimal algorithm.
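The exact energy conservation claimed for the implicit scheme can be illustrated on a single-particle toy problem. The sketch below is an analogue only, not the paper's nonlinear PIC solver: a Crank-Nicolson (trapezoidal) push in the linear field E(x) = -x, where the update is a Cayley transform of a skew-symmetric matrix and therefore conserves x^2 + v^2 to round-off for any time step:

```python
import numpy as np

dt = 0.5                                     # deliberately large step
A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # (x, v)' = A (x, v) for E = -x
# Crank-Nicolson one-step map: (I - dt/2 A)^-1 (I + dt/2 A), an orthogonal
# matrix since A is skew-symmetric, so the "energy" x^2 + v^2 is invariant.
step = np.linalg.solve(np.eye(2) - 0.5*dt*A, np.eye(2) + 0.5*dt*A)

u = np.array([1.0, 0.0])
e0 = u @ u
for _ in range(10_000):
    u = step @ u
energy_drift = abs(u @ u - e0)               # stays at round-off level
```

An explicit Euler push on the same problem gains energy by a factor (1 + dt^2) per step, which is the single-particle seed of the numerical heating that the fully implicit formulation eliminates.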
An interface tracking model for droplet electrocoalescence.
Erickson, Lindsay Crowl
2013-09-01
This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.
Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.
1990-01-01
The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption versus wavelength function.
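The calibration idea in the abstract (a model built from known samples, then applied to an unknown) can be sketched as a linear inverse-least-squares fit from multi-wavelength absorbance to concentration. All numbers below are synthetic and illustrative, not from the patent:

```python
import numpy as np

# Beer's-law-style linear response: absorbance at 3 wavelengths is
# proportional to concentration (sensitivities true_k are invented).
true_k = np.array([0.8, 0.3, 1.4])
c_train = np.linspace(1.0, 10.0, 20)             # known concentrations
A_train = np.outer(c_train, true_k)              # training spectra

# Fit concentration as a linear function of absorbances (+ intercept).
X = np.column_stack([A_train, np.ones(c_train.size)])
coef, *_ = np.linalg.lstsq(X, c_train, rcond=None)

# Predict an "unknown" sample whose spectrum corresponds to c = 5.
a_unknown = 5.0 * true_k
c_pred = float(np.append(a_unknown, 1.0) @ coef)
```

Real calibration against overlapping absorbers uses factor methods (PLS/PCR) rather than this bare least squares, but the model-then-predict structure is the same.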
Koteras, J.R.
1993-07-01
Tunnels buried deep within the earth constitute an important class geomechanics problems. Two numerical techniques used for the analysis of geomechanics problems, the finite element method and the boundary element method, have complementary characteristics for applications to problems of this type. The usefulness of combining these two methods for use as a geomechanics analysis tool has been recognized for some time, and a number of coupling techniques have been proposed. However, not all of them lend themselves to efficient computational implementations for large-scale problems. This report examines a coupling technique that can form the basis for an efficient analysis tool for large scale geomechanics problems through the use of an iterative equation solver.
Marzouk, Youssef; Fast P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M.; Ray, J. P.
2006-01-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak, so that the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
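A toy version of this inverse problem illustrates the mechanics: infer the number infected N from a few days of case counts, given an assumed incubation-period distribution, via a grid posterior with a Poisson likelihood. The incubation probabilities and numbers below are hypothetical, not calibrated to anthrax:

```python
import numpy as np

incubation = np.array([0.05, 0.20, 0.35, 0.25, 0.15])   # P(symptom onset on day d)
true_N = 400
observed = np.round(true_N * incubation[:3]).astype(int)  # only 3 days of data

# Grid posterior over N with a flat prior; daily counts ~ Poisson(N * p_d).
N_grid = np.arange(50, 2001)
lam = np.outer(N_grid, incubation[:3])
loglik = (observed * np.log(lam) - lam).sum(axis=1)
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()
N_map = N_grid[np.argmax(posterior)]
```

The paper's problem is harder (infection time and dose are also unknown, and the incubation model itself may be wrong), but the posterior-over-characterizations structure is the same.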
A minimum spanning forest based classification method for dedicated breast CT images
Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei
2015-11-15
Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT image with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
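The edge-preserving denoising step in the pipeline above can be sketched with a plain (single-scale) bilateral filter: weights combine spatial closeness with intensity similarity, so noise in flat regions is averaged out while a fat/glandular-style step edge survives. Parameters and the synthetic test image are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: spatial Gaussian x range Gaussian."""
    out = np.empty_like(img)
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius+1, -radius:radius+1]
    spatial = np.exp(-(ys**2 + xs**2) / (2*sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i+2*radius+1, j:j+2*radius+1]
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2*sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Two-intensity phantom (0.2 | 0.8 step at column 16) plus noise.
rng = np.random.default_rng(1)
img = np.where(np.arange(32)[None, :] < 16, 0.2, 0.8) * np.ones((32, 32))
noisy = img + rng.normal(0, 0.03, img.shape)
clean = bilateral_filter(noisy)
```

The multiscale variant used in the paper applies this idea across a decomposition of the image; the single-scale form already shows why the subsequent SVM classification benefits (cleaner per-pixel intensities, intact tissue boundaries).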
Wang, Ruofan; Wang, Jiang; Deng, Bin; Liu, Chen; Wei, Xile; Tsang, K. M.; Chan, W. L.
2014-03-15
A combined method composed of the unscented Kalman filter (UKF) and the synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used for studying Parkinson's disease for its relay role of connecting the basal ganglia and the cortex. In this work, we consider the condition when only the time series of the action potential with heavy noise are available. Numerical results demonstrate that not only can this method estimate model parameters from the extracted time series of the action potential successfully, but its performance is also much better than using the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the important role of the TC neuron in normal and pathological brain functions, the exploration of this method to estimate the critical parameters could have important implications for the study of its nonlinear dynamics and further treatment of Parkinson's disease.
Moller, Peter; Ichikawa, Takatoshi
2015-12-23
In this study, we propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use the Brownian shape-motion on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables: elongation (quadrupole moment Q2), neck d, left nascent fragment spheroidal deformation ϵf1, right nascent fragment deformation ϵf2, and two asymmetry variables, namely proton and neutron numbers in each of the two fragments. The extension of previous models (1) introduces a method to calculate this generalized potential-energy function and (2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the “compound-system” model to a model where the emerging fragment proton and neutron numbers also enter, over and above the compound system composition.
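The Brownian shape-motion idea (a random walk on a potential-energy surface accumulating in the fission valleys) has a one-dimensional analogue that is easy to demonstrate: a Metropolis random walk on a double-well "potential". The potential, temperature, and step size below are invented for illustration, not the paper's five-dimensional surface:

```python
import numpy as np

def V(x):
    return (x**2 - 1.0)**2            # toy surface: wells at x = -1 and x = +1

rng = np.random.default_rng(42)
beta, x = 5.0, 0.0                    # inverse "temperature"; start at the barrier
samples = []
for _ in range(20_000):
    prop = x + rng.normal(0.0, 0.5)   # proposed shape change
    # Metropolis acceptance: always downhill, uphill with Boltzmann probability.
    if rng.random() < np.exp(-beta * (V(prop) - V(x))):
        x = prop
    samples.append(x)
samples = np.asarray(samples)
frac_in_wells = np.mean(np.abs(samples) > 0.5)   # time spent near the two minima
```

Histogramming the walk's endpoints over many trajectories is the 1-D caricature of extracting a fragment-yield distribution Y from walks over the multidimensional surface.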
Nelson, A. J.; Cooper, G. W. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Ruiz, C. L.; Chandler, G. A.; Fehl, D. L.; Hahn, K. D.; Leeper, R. J.; Smelser, R.; Torres, J. A. [Sandia National Laboratories, Albuquerque, New Mexico 87185-1196 (United States)
2012-10-15
A novel method for modeling the neutron time of flight (nTOF) detector response in current mode for inertial confinement fusion experiments has been applied to the on-axis nTOF detectors located in the basement of the Z-Facility. It will be shown that this method can identify sources of neutron scattering, and is useful for predicting detector responses in future experimental configurations, and for identifying potential sources of neutron scattering when experimental set-ups change. This method can also provide insight on how much broadening neutron scattering contributes to the primary signals, which is then subtracted from them. Detector time responses are deconvolved from the signals, allowing a transformation from dN/dt to dN/dE, extracting neutron spectra at each detector location; these spectra are proportional to the absolute yield.
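The dN/dt to dN/dE transformation mentioned above is a change of variables with a Jacobian: for nonrelativistic neutrons over a flight path L, E(t) = m L^2 / (2 t^2), so dN/dE = dN/dt * |dt/dE| = dN/dt * t^3 / (m L^2). A sketch with a hypothetical flight path and synthetic pulse (not Z-Facility data); total counts are conserved under the transformation:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral, kept local so the sketch runs on any NumPy."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

m_n = 1.675e-27                                   # neutron mass, kg
L = 10.0                                          # assumed flight path, m

t = np.linspace(2e-7, 2e-6, 2000)                 # time of flight, s
dNdt = np.exp(-0.5*((t - 5e-7)/5e-8)**2)          # synthetic current-mode pulse
E = 0.5 * m_n * (L/t)**2                          # joules; decreasing in t
dNdE = dNdt * t**3 / (m_n * L**2)                 # Jacobian-weighted spectrum

counts_t = trapezoid(dNdt, t)
counts_E = abs(trapezoid(dNdE, E))                # E grid runs high-to-low
```

In the paper's workflow this step comes after deconvolving the detector time response, so the Jacobian acts on the true arrival-time distribution rather than the raw signal.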
Singledecker, Steven J.; Jones, Scotty W.; Dorries, Alison M.; Henckel, George; Gruetzmacher, Kathleen M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2012-07-01
In the coming fiscal years of potentially declining budgets, Department of Energy facilities such as the Los Alamos National Laboratory (LANL) will be looking to reduce the cost of radioactive waste characterization, management, and disposal processes. At the core of this cost reduction process will be choosing the most cost-effective, efficient, and accurate methods of radioactive waste characterization. Central to every radioactive waste management program is an effective and accurate waste characterization program. Choosing between methods can determine what is classified as low level radioactive waste (LLRW), transuranic waste (TRU), waste that can be disposed of under an Authorized Release Limit (ARL), industrial waste, and waste that can be disposed of in municipal landfills. The cost benefits of an accurate radioactive waste characterization program cannot be overstated. In addition, inaccurate characterization of radioactive waste can result in its incorrect classification, leading to higher disposal costs, Department of Transportation (DOT) violations, Notices of Violation (NOVs) from Federal and State regulatory agencies, waste rejection from disposal facilities, loss of operational capabilities, and loss of disposal options. Any one of these events could result in the program that mischaracterized the waste losing its ability to perform its primary operational mission. Generators that produce radioactive waste have four characterization strategies at their disposal: - Acceptable Knowledge/Process Knowledge (AK/PK); - Indirect characterization using a software application or other dose-to-curie methodologies; - Non-Destructive Analysis (NDA) tools such as gamma spectroscopy; - Direct sampling (e.g. grab samples or Surface Contaminated Object smears) and laboratory analysis. Each method has specific advantages and disadvantages. This paper will evaluate each method, detailing those advantages and disadvantages.
A Nonlocal Peridynamic Plasticity Model for the Dynamic Flow and Fracture of Concrete.
Vogler, Tracy; Lammi, Christopher James
2014-10-01
A nonlocal, ordinary peridynamic constitutive model is formulated to numerically simulate the pressure-dependent flow and fracture of heterogeneous, quasi-brittle materials, such as concrete. Classical mechanics and traditional computational modeling methods do not accurately model the distributed fracture observed within this family of materials. The peridynamic horizon, or range of influence, provides a characteristic length to the continuum and limits localization of fracture. Scaling laws are derived to relate the parameters of the peridynamic constitutive model to the parameters of the classical Drucker-Prager plasticity model. Thermodynamic analysis of associated and non-associated plastic flow is performed. An implicit integration algorithm is formulated to calculate the accumulated plastic bond extension and force state. The governing equations are linearized and the simulation of the quasi-static compression of a cylinder is compared to the classical theory. A dissipation-based peridynamic bond failure criterion is implemented to model fracture, and the splitting of a concrete cylinder is numerically simulated. Finally, calculation of the impact and spallation of a concrete structure is performed to assess the suitability of the material and failure models for simulating concrete during dynamic loadings. The peridynamic model is found to accurately simulate the inelastic deformation and fracture behavior of concrete during compression, splitting, and dynamically induced spall. The work expands the types of materials that can be modeled using peridynamics. A multi-scale methodology for simulating concrete to be used in conjunction with the plasticity model is presented. The work was funded by LDRD 158806.
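The classical side of the scaling laws mentioned above is the Drucker-Prager surface f = sqrt(J2) + alpha*I1 - k, evaluated from the stress invariants. A minimal sketch of that classical criterion (the alpha and k values are illustrative, not the report's calibration for concrete):

```python
import numpy as np

def drucker_prager(sigma, alpha=0.2, k=10.0):
    """Drucker-Prager yield function from a 3x3 stress tensor.

    f < 0: elastic; f >= 0: plastic flow. alpha sets pressure sensitivity,
    k the cohesion-like strength (here both are hypothetical, in MPa units).
    """
    I1 = np.trace(sigma)                    # first stress invariant
    s = sigma - I1/3.0 * np.eye(3)          # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)           # second deviatoric invariant
    return np.sqrt(J2) + alpha*I1 - k

# Triaxial compression state (compression negative, MPa): still elastic here,
# since the pressure term alpha*I1 pushes the state away from the surface.
sigma = np.diag([-30.0, -10.0, -10.0])
f = float(drucker_prager(sigma))
```

The peridynamic model expresses the same pressure-dependent surface in terms of bond force states, with the scaling laws mapping (alpha, k) onto the nonlocal parameters.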
Bishop, R. F.; Li, P. H. Y.; Campbell, C. E.
2014-10-15
We outline how the coupled cluster method of microscopic quantum many-body theory can be utilized in practice to give highly accurate results for the ground-state properties of a wide variety of highly frustrated and strongly correlated spin-lattice models of interest in quantum magnetism, including their quantum phase transitions. The method itself is described, and it is shown how it may be implemented in practice to high orders in a systematically improvable hierarchy of (so-called LSUBm) approximations, by the use of computer-algebraic techniques. The method works from the outset in the thermodynamic limit of an infinite lattice at all levels of approximation, and it is shown both how the 'raw' LSUBm results are themselves generally excellent in the sense that they converge rapidly, and how they may accurately be extrapolated to the exact limit, m → ∞, of the truncation index m, which denotes the only approximation made. All of this is illustrated via a specific application to a two-dimensional, frustrated, spin-half J{sub 1}{sup XXZ}-J{sub 2}{sup XXZ} model on a honeycomb lattice with nearest-neighbor and next-nearest-neighbor interactions with exchange couplings J{sub 1} > 0 and J{sub 2} ≡ κJ{sub 1} > 0, respectively, where both interactions are of the same anisotropic XXZ type. We show how the method can be used to determine the entire zero-temperature ground-state phase diagram of the model in the range 0 ≤ κ ≤ 1 of the frustration parameter and 0 ≤ Δ ≤ 1 of the spin-space anisotropy parameter. In particular, we identify a candidate quantum spin-liquid region in the phase space.
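The extrapolation step described above is a small least-squares fit: the LSUBm sequence is fitted against inverse powers of the truncation index and the constant term is read off as the m → ∞ estimate. The 1/m^2 + 1/m^4 ansatz below is a commonly used scaling form for ground-state energies, assumed here for illustration with synthetic data:

```python
import numpy as np

m = np.array([4.0, 6.0, 8.0, 10.0])        # LSUBm truncation indices
e = -0.55 - 0.30/m**2 + 0.12/m**4          # synthetic per-site energies

# Fit e(m) = e_inf + a/m^2 + b/m^4 and extrapolate to m -> infinity.
X = np.column_stack([np.ones_like(m), m**-2.0, m**-4.0])
coef, *_ = np.linalg.lstsq(X, e, rcond=None)
e_inf = coef[0]                            # extrapolated exact-limit estimate
```

Other observables (e.g. the sublattice magnetization) conventionally use different leading exponents, so the ansatz has to be chosen per quantity; that choice is part of the physics input, not the algebra.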
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark S.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammond, Glenn E.; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Zheng, Chunmiao
2012-03-05
The Integrated Field Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex subsurface biogeochemical setting where groundwater and riverwater interact. A series of forefront science questions on reactive mass transfer motivates research. These questions relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated biogeochemical system. The project was initiated in February 2007, with CY 2007, CY 2008, CY 2009, and CY 2010 progress summarized in preceding reports. A project peer review was held in March 2010, and the IFRC project acted upon all suggestions and recommendations made in consequence by reviewers and SBR/DOE. These responses have included the development of 'Modeling' and 'Well-Field Mitigation' plans that are now posted on the Hanford IFRC web-site, and modifications to the IFRC well-field completed in CY 2011. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. Significant, impactful progress has been made in CY 2011 including: (i) well modifications to eliminate well-bore flows, (ii) hydrologic testing of the modified well-field and upper aquifer, (iii) geophysical monitoring of winter precipitation infiltration through the U-contaminated vadose zone and spring river water intrusion to the IFRC, (iv) injection experimentation to probe the lower vadose zone and to evaluate the transport behavior of high U concentrations, (v) extended passive monitoring during the period of water table rise and fall, and (vi) collaborative down-hole experimentation with the PNNL SFA on the biogeochemistry of the 300 A Hanford-Ringold contact and the
Final Technical Report -- Bridging the PSI Knowledge Gap: A Multiscale Approach
Whyte, Dennis
2014-12-12
The Plasma Surface Interactions (PSI) Science Center formed by the grant undertook a multidisciplinary set of studies on the complex interface between the plasma and solid states of matter. The strategy of the center was to combine and integrate the experimental, diagnostic and modeling toolkits from multiple institutions towards specific PSI problems. In this way the Center could tackle integrated science issues which were not addressable by single institutions, as well as evolve the underlying science of the PSI in a more general way than just for fusion applications. The overall strategy proved very successful. The research result and highlights of the MIT portion of the Center are primarily described. A particular highlight is the study of tungsten nano-tendril growth in the presence of helium plasmas. The Center research provided valuable new insights to the mechanisms controlling the nano-tendrils by developing coupled modeling and in situ diagnostic methods which could be directly compared. For example, the role of helium accumulation in tungsten distortion in the surface was followed with unique in situ helium concentration diagnostics developed. These depth-profiled, time-resolved helium concentration measurements continue to challenge the numerical models of nano-tendrils. The Center team also combined its expertise on tungsten nano-tendrils to demonstrate for the first time the growth of the tendrils in a fusion environment on the Alcator C-Mod fusion experiment, thus having significant impact on the broader fusion research effort. A new form of isolated nano-tendril “columns” were identified which are now being used to understand the underlying mechanisms controlling the tendril growth. The Center also advanced PSI science on a broader front with a particular emphasis on developing a wide range of in situ PSI diagnostic tools at the DIONISOS facility at MIT. For example the strong suppression of sputtering by the certain combination of light
Gettelman, Andrew
2015-10-01
In this project we have been upgrading the Multiscale Modeling Framework (MMF) in the Community Atmosphere Model (CAM), also known as Super-Parameterized CAM (SP-CAM). This has included a major effort to update the coding standards and interface with CAM so that it can be placed on the main development trunk. It has also included development of a new software structure for CAM to be able to handle sub-grid column information. These efforts have formed the major thrust of the work.
Albanese, A.; Bhagat, N.; Friend, L.; Lamontagne, J.; Pouder, R.; Vinjamuri, G.
1980-03-01
The use of coal as a source of high Btu gas is currently viewed as one possible means of supplementing dwindling natural gas supplies. While certain coal gasification processes have demonstrated technical feasibility, much uncertainty and inconsistency remains regarding the capital and operating costs of large-scale coal conversion facilities; cost estimates may vary by as much as 50%. Studies conducted for the American Gas Association (AGA) and US Energy Research and Development Administration by C.F. Braun and Co. have defined technical specifications and cost guidelines for estimating costs of coal gasification technologies (AGA Guidelines). Based on the AGA Guidelines, Braun has also prepared cost estimates for selected coal gasification processes. Recent efforts by International Research and Technology Inc. (IR and T) have led to development of the Materials-Process-Product Model (MPPM), a comprehensive analytic tool for evaluation of processes and costs for coal gasification and other coal conversion technologies. This validation of the MPPM presents a comparison of engineering and cost computation methodologies employed in the MPPM to those employed by Braun, and a comparison of MPPM results to Braun cost estimates. These comparisons indicate that the MPPM has the potential to be a valuable tool for assisting in the evaluation of coal gasification technologies.
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark E.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammon, Glenn; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Ward, Anderson L.; Zheng, Chunmiao
2010-02-01
The Integrated Field-Scale Subsurface Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on mass transfer are posed for research which relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated system. The project was initiated in February 2007, with CY 2007 and CY 2008 progress summarized in preceding reports. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. Significant, impactful progress has been made in CY 2009 with completion of extensive laboratory measurements on field sediments, field hydrologic and geophysical characterization, four field experiments, and modeling. The laboratory characterization results are being subjected to geostatistical analyses to develop spatial heterogeneity models of U concentration and chemical, physical, and hydrologic properties needed for reactive transport modeling. The field experiments focused on: (1) physical characterization of the groundwater flow field during a period of stable hydrologic conditions in early spring, (2) comprehensive groundwater monitoring during spring to characterize the release of U(VI) from the lower vadose zone to the aquifer during water table rise and fall, (3) dynamic geophysical monitoring of salt-plume migration during summer, and (4) a U reactive tracer experiment (desorption) during the fall. Geophysical characterization of the well field was completed using the down-well Electrical Resistance Tomography (ERT) array, with results subjected to robust
Development of mpi_EPIC Model for Global Agroecosystem Modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A. {Cyber Sciences}; Schuchart, Joseph; Kline, Keith L; Wei, Yaxing; Ricciuto, Daniel M; Wullschleger, Stan D; Post, Wilfred M; Izaurralde, Dr. R. Cesar
2015-01-01
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
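The workload balancing that mpi_EPIC performs across MPI ranks can be illustrated with a minimal static decomposition. The function below is a generic sketch of contiguous block partitioning of global grid cells across ranks (a hypothetical simplification; mpi_EPIC's actual decomposition and its Vampir-guided tuning are not described in this abstract):

```python
def partition_cells(n_cells, n_ranks):
    """Contiguous block partition of n_cells grid cells across
    n_ranks MPI ranks, balancing counts to within one cell.
    Returns a list of (start, stop) index bounds, one per rank."""
    base, extra = divmod(n_cells, n_ranks)
    bounds = []
    start = 0
    for r in range(n_ranks):
        # The first `extra` ranks take one additional cell each.
        size = base + (1 if r < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds
```

For example, `partition_cells(10, 3)` yields three nearly equal blocks, `[(0, 4), (4, 7), (7, 10)]`; in an MPI program each rank would simulate only the cells in its own block.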
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.; Conrad, Mark S.; Fredrickson, Jim K.; Freshley, Mark D.; Haggerty, Roy; Hammond, Glenn E.; Kent, Douglas B.; Konopka, Allan; Lichtner, Peter C.; Liu, Chongxuan; McKinley, James P.; Murray, Christopher J.; Rockhold, Mark L.; Rubin, Yoram; Vermeul, Vincent R.; Versteeg, Roelof J.; Ward, Anderson L.; Zheng, Chunmiao
2011-02-01
The Integrated Field Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex subsurface hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on reactive mass transfer focus the research. These questions relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated system. The project was initiated in February 2007, with CY 2007, CY 2008, and CY 2009 progress summarized in preceding reports. A project peer review was held in March 2010, and the IFRC project has responded to all suggestions and recommendations made in consequence by reviewers and SBR/DOE. These responses have included the development of “Modeling” and “Well-Field Mitigation” plans that are now posted on the Hanford IFRC web-site. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. Significant, impactful progress has been made in CY 2010 including the quantification of well-bore flows in the fully screened wells and the testing of means to mitigate them; the development of site geostatistical models of hydrologic and geochemical properties including the distribution of U; developing and parameterizing a reactive transport model of the smear zone that supplies contaminant U to the groundwater plume; performance of a second passive experiment of the spring water table rise and fall event with an associated multi-point tracer test; performance of downhole biogeochemical experiments where colonization substrates and discrete water and gas samplers were deployed to the lower aquifer zone; and modeling of past injection experiments for
Ardham, Vikram Reddy; Leroy, Frédéric; Deichmann, Gregor; van der Vegt, Nico F. A.
2015-12-28
We address the question of how reducing the number of degrees of freedom modifies the interfacial thermodynamic properties of heterogeneous solid-liquid systems. We consider the example of n-hexane interacting with multi-layer graphene, which we model both with fully atomistic and coarse-grained (CG) models. The CG models are obtained by means of the conditional reversible work (CRW) method. The interfacial thermodynamics of these models is characterized by the solid-liquid work of adhesion W{sub SL}, calculated by means of the dry-surface methodology through molecular dynamics simulations. We find that the CRW potentials lead to values of W{sub SL} that are larger than the atomistic ones. The relationship between the structure of n-hexane in the vicinity of the surface and W{sub SL} is elucidated through a detailed study of the energy and entropy components of W{sub SL}. We highlight the crucial role played by the solid-liquid energy fluctuations. Our approach suggests that CG potentials should be designed in such a way that they preserve not only the range of solid-liquid interaction energies but also their fluctuations, in order to preserve the reference atomistic value of W{sub SL}. Our study thus opens perspectives for deriving CG interaction potentials that preserve the thermodynamics of solid-liquid contacts and will find application in studies of materials driven by interfaces.
Presentation given by NREL at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about significant enhancement of computational...
Presentation given by National Renewable Energy Laboratory at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about...
Park, Sungsu
2015-11-29
Originally, the main role of the P.I. (Sungsu Park) in this project was to improve the treatment of cloud microphysics in the CAM5 shallow and deep convection schemes. As the project progressed, however, the main research theme shifted, with the permission of the program manager, to the development of a new unified convection scheme (UNICON).
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
This document is a pre-publication Federal Register final rule regarding alternative efficiency determination methods, basic model definition, and compliance for commercial HVAC, refrigeration, and water heating equipment, as issued by the Deputy Assistant Secretary for Energy Efficiency on December 22, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.
Simulation of macromolecule self-assembly in solution: A multiscale approach
Lavino, Alessio D.; Barresi, Antonello A.; Marchisio, Daniele L.; di Pasquale, Nicodemo; Carbone, Paola
2015-12-17
One of the most common processes to produce polymer nanoparticles is to induce self-assembly by using the solvent-displacement method, in which the polymer is dissolved in a “good” solvent and the solution is then mixed with an “anti-solvent”. The polymer's ability to self-assemble in solution is therefore determined by its structural and transport properties in solutions of the pure solvents and at the intermediate compositions. In this work, we focus on poly-ε-caprolactone (PCL), which is a biocompatible polymer that finds widespread application in the pharmaceutical and biomedical fields, performing simulations at three different scales using three different computational tools: full atomistic molecular dynamics (MD), population balance modeling (PBM) and computational fluid dynamics (CFD). Simulations consider PCL chains of different molecular weight in solutions of pure acetone (good solvent), of pure water (anti-solvent) and their mixtures, and mixing at different rates and initial concentrations in a confined impinging jets mixer (CIJM). Our MD simulations reveal that the nano-structuring of one of the solvents in the mixture leads, unexpectedly, to an identical polymer structure irrespective of the concentration of the two solvents. In particular, although in pure solvents the behavior of the polymer is, as expected, very different, at intermediate compositions, the PCL chain shows properties very similar to those found in pure acetone as a result of the clustering of the acetone molecules in the vicinity of the polymer chain. We derive an analytical expression to predict the polymer structural properties in solution at different solvent compositions and use it to formulate an aggregation kernel to describe the self-assembly in the CIJM via PBM and CFD. The simulations are finally validated against experiments.
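The aggregation kernel mentioned above closes the population balance model. As a generic illustration of what such a kernel looks like, the function below implements the standard size-dependent Brownian (perikinetic) kernel for diffusion-limited aggregation; this is not the composition-dependent kernel derived in the paper, which additionally folds in the polymer structural properties, and the default temperature and water viscosity values are only example parameters:

```python
def brownian_kernel(v_i, v_j, kT=1.380649e-23 * 298.15, mu=8.9e-4):
    """Standard Brownian aggregation kernel for particle volumes v_i, v_j:
        beta = (2 kT / 3 mu) * (v_i^(1/3) + v_j^(1/3))
                             * (v_i^(-1/3) + v_j^(-1/3))
    kT: thermal energy [J], mu: solvent viscosity [Pa s] (example values).
    A generic PBM closure, not the paper's derived kernel."""
    a = v_i ** (1.0 / 3.0)
    b = v_j ** (1.0 / 3.0)
    return (2.0 * kT / (3.0 * mu)) * (a + b) * (1.0 / a + 1.0 / b)
```

For equal-sized particles the size factors collapse to 4, giving the familiar size-independent limit beta = 8kT/(3mu).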
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post
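The report's definition of the wall heat fraction, one minus the fraction of decay heat carried away by the ventilation air, reduces to a one-line calculation. The sketch below only restates that definition (variable names are illustrative, not from the model's code):

```python
def wall_heat_fraction(q_removed_by_air, q_decay):
    """Wall heat fraction as defined in the Ventilation Model report:
    one minus the ratio of heat carried away by the ventilation air
    to the heat produced by radionuclide decay. The result is the
    fraction of decay heat conducted into the surrounding rock mass,
    which downstream models use to initialize post-closure analyses."""
    heat_removal = q_removed_by_air / q_decay
    return 1.0 - heat_removal
```

For example, if ventilation removes 70 of every 100 units of decay heat at some time and location along the drift, the wall heat fraction there is 0.3.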
Kohut, Sviataslau V.; Staroverov, Viktor N.; Ryabinkin, Ilya G.
2014-05-14
We describe a method for constructing a hierarchy of model potentials approximating the functional derivative of a given orbital-dependent exchange-correlation functional with respect to electron density. Each model is derived by assuming a particular relationship between the self-consistent solutions of Kohn–Sham (KS) and generalized Kohn–Sham (GKS) equations for the same functional. In the KS scheme, the functional is differentiated with respect to the density; in the GKS scheme, with respect to the orbitals. The lowest-level approximation is the orbital-averaged effective potential (OAEP) built with the GKS orbitals. The second-level approximation, termed the orbital-consistent effective potential (OCEP), is based on the assumption that the KS and GKS orbitals are the same. It has the form of the OAEP plus a correction term. The highest-level approximation is the density-consistent effective potential (DCEP), derived under the assumption that the KS and GKS electron densities are equal. The analytic expression for a DCEP is the OCEP formula augmented with kinetic-energy-density-dependent terms. For the exact-exchange functional, the OAEP is the Slater potential, the OCEP is roughly equivalent to the localized Hartree–Fock approximation and related models, and the DCEP is practically indistinguishable from the true optimized effective potential for exact exchange. All three levels of the proposed hierarchy require solutions of the GKS equations as input and have the same affordable computational cost.
Wilkowski, Gery M.; Rudland, David L.; Shim, Do-Jun; Brust, Frederick W.; Babu, Sundarsanam
2008-06-30
The potential to save trillions of BTU in energy usage and billions of dollars in cost on an annual basis based on use of higher strength steel in major oil and gas transmission pipeline construction is a compelling opportunity recognized by the US Department of Energy (DOE). The use of high-strength steels (X100) is expected to result in energy savings across the spectrum, from manufacturing the pipe to transportation and fabrication, including welding of line pipe. Elementary examples of energy savings include more than 25 trillion BTU saved annually based on lower energy costs to produce the thinner-walled high-strength steel pipe, with the potential for the US part of the Alaskan pipeline alone saving more than 7 trillion BTU in production and much more in transportation and assembling. Annual production, maintenance and installation of just US domestic transmission pipeline is likely to save 5 to 10 times this amount based on current planned and anticipated expansions of oil and gas lines in North America. Among the most important conclusions from these studies were: • While computational weld models to predict residual stress and distortions are well-established and accurate, related microstructure models need improvement. • The Fracture Initiation Transition Temperature (FITT) Master Curve properly predicts surface-cracked pipe brittle-to-ductile initiation temperature. It has value in developing Codes and Standards to better correlate full-scale behavior from either CTOD or Charpy test results with the proper temperature shifts from the FITT master curve method. • For stress-based flaw evaluation criteria, the new circumferentially cracked pipe limit-load solution in the 2007 API 1104 Appendix A approach is overly conservative by a factor of 4/π, which has additional implications. • For strain-based design of girth weld defects, the hoop stress effect is the most significant parameter impacting CTOD-driving force and can increase the crack
Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.
2001-01-16
Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
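The integer-increment particle march described above can be sketched in one dimension. The function below is a hypothetical simplification of the voxel-traversal idea (the disclosed method operates on a 3-D geometric model of uniform volume elements with transport along a primary direction): it steps through a material array in integer increments until the material differs from that of the starting element, the "position of intersection":

```python
def find_material_boundary(materials, start, step=1):
    """March along a particle track in integer voxel increments,
    starting in voxel `start`, until reaching a voxel whose
    anatomical material differs from the starting voxel's material.
    Returns the index of that boundary voxel, or None if the
    particle exits the geometric model first.
    (1-D sketch of the integer-stepping idea; names are illustrative.)"""
    ref = materials[start]
    i = start + step
    while 0 <= i < len(materials):
        if materials[i] != ref:
            return i  # position of intersection with a new material
        i += step
    return None  # particle exited the geometric model
```

At the returned index the dosimetry code would then decide whether the neutron is captured, scattered, or transmitted; integer stepping avoids per-step floating-point ray-surface intersection tests, which is the source of the claimed speedup.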
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of an electronic portal imaging device (EPID) and applying it to a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. The model was first validated through comparison with measurements of dose on the EPID using open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction that employed our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom simulating prostate and large pelvis intensity modulated radiation therapy. To enhance agreement between calculations and measurements of dose near penumbral regions, convolution conversion of acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through measurements of image and calculations of dose in EPID through flat phantoms of various thicknesses. The factors were used to convert acquired images in EPID into dose. Results: For open beam measurements, the model showed agreement with measurements in dose difference better than 2% across open fields. For tests with a Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in EPID showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses showing pass rates between 93.3% and 100% in isocentric perpendicular planes to the beam direction given 3 mm DTA and 3% DD for all beams. On isocentric axial planes, the pass rates varied
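The gamma pass rates quoted above combine a distance-to-agreement (DTA) and a dose-difference (DD) criterion. A simplified 1-D global gamma analysis, shown here only to illustrate the metric (clinical tools such as the one used in the study operate on 2-D/3-D dose grids, and the global/local normalization choice is an assumption here), looks like this:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing, dta=4.5, dd=0.03):
    """1-D global gamma analysis: for each reference point, gamma is
    the minimum over evaluated points of
        sqrt((dr / DTA)^2 + (dDose / (dd * Dmax))^2),
    and a point passes if gamma <= 1. `spacing` is the grid spacing
    in mm, `dta` the DTA criterion in mm, `dd` the dose-difference
    criterion as a fraction of the reference maximum (global norm).
    Returns the pass rate in percent."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    x = np.arange(len(ref)) * spacing
    dmax = ref.max()
    gammas = []
    for i, d_ref in enumerate(ref):
        dr = (x - x[i]) / dta                 # spatial terms, all points
        dD = (evl - d_ref) / (dd * dmax)      # dose terms, all points
        gammas.append(np.sqrt(dr ** 2 + dD ** 2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

Identical reference and evaluated profiles give a 100% pass rate, while a grossly rescaled profile fails everywhere; real EPID comparisons fall in between, as in the 90.8% to 98.8% range reported.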
Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; Rother, Gernot; Littrell, Kenneth C.; Allard, Lawrence F.; Pollington, Anthony D.; Wesolowski, David J.
2015-06-01
We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer to centimeter scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted from 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image-scale processing techniques. Significant changes were observed in the multiscale pore structures. By three days much of the overgrowth in the low-porosity sample dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass fractal or fuzzy interface behavior was observed suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales with increased precipitation.
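Fractal dimensions like those discussed above are typically extracted from the power-law region of the small-angle scattering curve. For a surface fractal the standard relation is I(q) ∝ q^(D_s − 6), so D_s follows from the log-log slope; the helper below is a generic sketch of that fit, not the authors' analysis pipeline, and it assumes the supplied (q, I) points already lie within the power-law regime:

```python
import numpy as np

def surface_fractal_dimension(q, intensity):
    """Estimate a surface-fractal dimension D_s from small-angle
    scattering data using I(q) ~ q^(D_s - 6), i.e.
    D_s = 6 + slope of log(I) versus log(q). Assumes the input
    points are restricted to the power-law scattering regime."""
    slope = np.polyfit(np.log(q), np.log(intensity), 1)[0]
    return 6.0 + slope
```

A smooth (non-fractal) surface gives a slope of −4 (Porod scattering) and hence D_s = 2; slopes between −4 and −3 correspond to rough surface fractals with 2 < D_s < 3.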
Collinson, Glyn A.; Dorelli, John C.; Moore, Thomas E.; Pollock, Craig; Mariano, Al; Shappirio, Mark D.; Adrian, Mark L.; Avanov, Levon A.; Lewis, Gethyn R.; Kataria, Dhiren O.; Bedington, Robert; Owen, Christopher J.; Walsh, Andrew P.; Arridge, Chris S.; Gliese, Ulrik; Barrie, Alexander C.; Tucker, Corey
2012-03-15
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
Continuum-kinetic-microscopic model of lung clearance due to core-annular fluid entrainment
Mitran, Sorin
2013-07-01
The human lung is protected against aspirated infectious and toxic agents by a thin liquid layer lining the interior of the airways. This airway surface liquid is a bilayer composed of a viscoelastic mucus layer supported by a fluid film known as the periciliary liquid. The viscoelastic behavior of the mucus layer is principally due to long-chain polymers known as mucins. The airway surface liquid is cleared from the lung by ciliary transport, surface tension gradients, and airflow shear forces. This work presents a multiscale model of the effect of airflow shear forces, as exerted by tidal breathing and cough, upon clearance. The composition of the mucus layer is complex and variable in time. To avoid the restrictions imposed by adopting a viscoelastic flow model of limited validity, a multiscale computational model is introduced in which the continuum-level properties of the airway surface liquid are determined by microscopic simulation of long-chain polymers. A bridge between microscopic and continuum levels is constructed through a kinetic-level probability density function describing polymer chain configurations. The overall multiscale framework is especially suited to biological problems due to the flexibility afforded in specifying microscopic constituents, and examining the effects of various constituents upon overall mucus transport at the continuum scale.
Khachatryan, V.
2015-06-09
A search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks is presented. Events with hadronic jets and one or two oppositely charged leptons are selected from a data sample corresponding to an integrated luminosity of 19.5 fb^{-1} collected by the CMS experiment at the LHC in pp collisions at a centre-of-mass energy of 8 TeV. In order to separate the signal from the larger tt̄ + jets background, this analysis uses a matrix element method that assigns a probability density value to each reconstructed event under signal or background hypotheses. The ratio between the two values is used in a maximum likelihood fit to extract the signal yield. The results are presented in terms of the measured signal strength modifier, μ, relative to the standard model prediction for a Higgs boson mass of 125 GeV. The observed (expected) exclusion limit at a 95% confidence level is μ < 4.2 (3.3), corresponding to a best fit value μ̂ = 1.2^{+1.6}_{-1.5}.
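The core of the matrix element method above is a per-event likelihood ratio: each event gets a probability density under the signal and background hypotheses, and a maximum-likelihood fit extracts the signal strength μ. A toy sketch of that idea follows; the densities, yields, seed, and event sample are invented for illustration and are not the CMS implementation.

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5*((x - mu)/sigma)**2) / (sigma*math.sqrt(2*math.pi))

# invented per-event probability densities under the two hypotheses
f_sig = lambda x: gauss_pdf(x, 1.0, 0.3)   # "signal-like" observable
f_bkg = lambda x: gauss_pdf(x, 0.0, 0.8)   # "background-like" observable

def discriminant(x):
    # ratio of the two per-event densities, mapped to [0, 1]
    return f_sig(x) / (f_sig(x) + f_bkg(x))

# toy event sample generated with true signal strength mu = 1
random.seed(42)
n_sig, n_bkg = 50, 500
events = ([random.gauss(1.0, 0.3) for _ in range(n_sig)] +
          [random.gauss(0.0, 0.8) for _ in range(n_bkg)])

def nll(mu):
    # negative log-likelihood of the signal-plus-background mixture
    norm = mu*n_sig + n_bkg
    return -sum(math.log((mu*n_sig*f_sig(x) + n_bkg*f_bkg(x))/norm)
                for x in events)

mu_hat = min((i*0.05 for i in range(81)), key=nll)  # grid scan over [0, 4]
```

A real analysis would minimize the likelihood with a proper optimizer and propagate systematic uncertainties; the grid scan here only shows where the discriminant and the fit enter.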
Multiscale simulation of xenon diffusion and grain boundary segregation in UO₂
Andersson, David A.; Tonks, Michael R.; Casillas, Luis; Vyas, Shyam; Nerikar, Pankaj; Uberuaga, Blas P.; Stanek, Christopher R.
2015-07-01
In light water reactor fuel, gaseous fission products segregate to grain boundaries, resulting in the nucleation and growth of large intergranular fission gas bubbles. The segregation rate is controlled by diffusion of fission gas atoms through the grains and interaction with the boundaries. Based on the mechanisms established from earlier density functional theory (DFT) and empirical potential calculations, diffusion models for xenon (Xe), uranium (U) vacancies and U interstitials in UO₂ have been derived for both intrinsic (no irradiation) and irradiation conditions. Segregation of Xe to grain boundaries is described by combining the bulk diffusion model with a model for the interaction between Xe atoms and three different grain boundaries in UO₂ (Σ5 tilt, Σ5 twist and a high angle random boundary), as derived from atomistic calculations. The present model does not attempt to capture nucleation or growth of fission gas bubbles at the grain boundaries. The point defect and Xe diffusion and segregation models are implemented in the MARMOT phase field code, which is used to calculate effective Xe and U diffusivities as well as to simulate Xe redistribution for a few simple microstructures.
Henson, Kriste M.; Goulias, Konstadinos G.
2010-11-30
The ability to transfer national travel patterns to a local population is of interest when attempting to model megaregions or areas that exceed metropolitan planning organization (MPO) boundaries. At the core of this research are questions about the connection between travel behavior and land use, urban form, and accessibility. As part of this process, a group of land use variables has been identified to define activity and travel patterns for individuals and households. The 2001 National Household Travel Survey (NHTS) participants are divided into categories comprising a set of latent cluster models representing persons, travel, and land use. These are compared to two sets of cluster models constructed for two local travel surveys. Comparison-of-means statistical tests are used to assess differences among sociodemographic groups residing in localities with similar land uses. The results show that the NHTS and the local surveys share mean population activity and travel characteristics. However, these similarities mask behavioral heterogeneity that is revealed when distributions of activity and travel behavior are examined. Therefore, data from a national household travel survey cannot be used to model local population travel characteristics if the goal is to model the actual distributions rather than mean travel behavior characteristics.
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
Rivard, MJ; Ghadyani, HR; Bastien, AD; Lutz, NN; Hepel, JT
2015-06-15
Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Because commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over
Multi-scale and Multi-phase deformation of crystalline materials
Energy Science and Technology Software Center (OSTI)
2007-12-01
The MDEF package contains capabilities for modeling the deformation of materials at the crystal scale. Primary code capabilities are: both "strength" and "equation of state" aspects of material response, post-processing utilities, and utilities for comparing results with data from diffraction experiments.
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
Ingber, Marc; Vorobieff, Peter
2014-03-14
We have experimentally demonstrated how microscale phenomena affect suspended particle behavior on the mesoscale, and how particle group behavior on the mesoscale influences the macroscale suspension behavior. Semi-analytical and numerical methods to treat flows on different scales have been developed, and a framework to combine these scale-dependent treatments has been described.
Not Available
1994-10-01
A handbook on "A Method for the Assessment of Site-specific Economic Impacts of Industrial and Commercial Biomass Energy Facilities" has been prepared by Resource Systems Group Inc. under contract to the Southeastern Regional Biomass Energy Program (SERBEP). The handbook includes a user-friendly Lotus 123 spreadsheet which calculates the economic impacts of biomass energy facilities. The analysis uses a hybrid approach, combining direct site-specific data provided by the user with indirect impact multipliers from the US Forest Service IMPLAN input/output model for each state. Direct economic impacts are determined primarily from site-specific data, and indirect impacts are determined from the IMPLAN multipliers. The economic impacts are given in terms of income, employment, and state and federal taxes generated directly by the specific facility and by the indirect economic activity associated with each project. A worksheet is provided which guides the user in identifying and entering the appropriate financial data on the plant to be evaluated. The IMPLAN multipliers for each state are included in a database within the program. The multipliers are applied automatically after the user has entered the site-specific data and the state in which the facility is located. Output from the analysis includes a summary of direct and indirect income, employment and taxes. Case studies of large and small wood energy facilities and an ethanol plant are provided as examples to demonstrate the method. Although the handbook and program are intended for use by those with no previous experience in economic impact analysis, suggestions are given for the more experienced user who may wish to modify the analysis techniques.
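The direct-plus-indirect calculation the handbook automates reduces to simple multiplier arithmetic, sketched below. The dollar figures, job counts, and multiplier values are invented for illustration; the real spreadsheet draws state-specific multipliers from the IMPLAN database.

```python
# hypothetical figures for a small wood-energy facility
direct_income = 1_200_000.0   # $/yr paid directly by the facility
direct_jobs = 15
income_multiplier = 1.8       # assumed IMPLAN-style total/direct ratio
jobs_multiplier = 1.6

# indirect impacts are the economic activity induced beyond the site itself
indirect_income = direct_income * (income_multiplier - 1.0)
total_income = direct_income * income_multiplier
indirect_jobs = direct_jobs * (jobs_multiplier - 1.0)
total_jobs = direct_jobs * jobs_multiplier
```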
Rajaram, Harihar; Brutz, Michael; Klein, Dylan R; Mallikamas, Wasin
2014-09-18
at different scales, and track transport across fracture-matrix interfaces based on rigorous local approximations to the transport equations. This modeling approach can incorporate aperture variability, multi-scale preferential flow and matrix heterogeneity. We developed efficient particle-tracking methods for handling matrix diffusion and adsorption on fracture walls and demonstrated their efficiency for use within the context of large-scale complex fracture network models with variability in apertures across a network of fractures and within individual fractures.
Accelerated Cartesian expansions for the rapid solution of periodic multiscale problems
Baczewski, Andrew David; Dault, Daniel L.; Shanker, Balasubramaniam
2012-07-03
We present an algorithm for the fast and efficient solution of integral equations that arise in the analysis of scattering from periodic arrays of PEC objects, such as multiband frequency selective surfaces (FSS) or metamaterial structures. Our approach relies upon the method of Accelerated Cartesian Expansions (ACE) to rapidly evaluate the requisite potential integrals. ACE is analogous to FMM in that it can be used to accelerate the matrix vector product used in the solution of systems discretized using MoM. Here, ACE provides linear scaling in both CPU time and memory. Details regarding the implementation of this method within the context of periodic systems are provided, as well as results that establish error convergence and scalability. In addition, we also demonstrate the applicability of this algorithm by studying several exemplary electrically dense systems.
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec; Peng, Ivy Bo; Laure, Erwin; Markidis, Stefano
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
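The expansion idea, representing a velocity distribution by a finite set of Hermite coefficients and adapting the truncation as the spectrum decays, can be illustrated with a standalone sketch. The shifted Maxwellian, drift value, tolerance, and quadrature below are assumptions made for this example, not details of the paper's solver (which couples the coefficients to the Vlasov-Poisson system).

```python
import math

def hermite_e(n, v):
    # probabilists' Hermite polynomials He_n via the three-term recurrence
    # He_{k+1}(v) = v*He_k(v) - k*He_{k-1}(v)
    h0, h1 = 1.0, v
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, v*h1 - k*h0
    return h1

def coefficients(f, nmax, v_lo=-10.0, v_hi=10.0, dv=0.005):
    # c_n = (1/n!) * integral f(v) He_n(v) dv, by a simple Riemann sum;
    # this expands f(v) = sum_n c_n He_n(v) w(v) with Gaussian weight w
    vs = [v_lo + i*dv for i in range(int((v_hi - v_lo)/dv) + 1)]
    return [sum(f(v)*hermite_e(n, v) for v in vs)*dv / math.factorial(n)
            for n in range(nmax + 1)]

u = 0.5  # assumed drift of a unit-width shifted Maxwellian
f = lambda v: math.exp(-0.5*(v - u)**2) / math.sqrt(2*math.pi)
cs = coefficients(f, 10)  # analytically c_n = u**n / n! for this f

# adaptive truncation: drop trailing coefficients below a tolerance
tol = 1e-6
while len(cs) > 1 and abs(cs[-1]) < tol:
    cs.pop()
```

For this drifting Maxwellian the coefficients decay factorially, so only a handful of Hermite modes survive the truncation; a more kinetic (non-Maxwellian) distribution would force the adaptive strategy to retain many more.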
Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.
2011-09-30
This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.
Yortsos, Yanis C.
2002-10-08
In this report, the thrust areas include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.
Michael Tonks; Derek Gaston; Cody Permann; Paul Millett; Glen Hansen; Chris Newman
2009-08-01
Reactor fuel performance is sensitive to microstructure changes during irradiation (such as fission gas and pore formation). This study proposes an approach to capture microstructural changes in the fuel by a two-way coupling of a mesoscale phase field irradiation model to an engineering scale, finite element calculation. This work solves the multiphysics equation system at the engineering scale in a parallel, fully-coupled, fully-implicit manner using a preconditioned Jacobian-free Newton-Krylov (JFNK) method. A sampling of the temperature at the Gauss points of the coarse scale is passed to a parallel sequence of mesoscale calculations within the JFNK function evaluation phase of the calculation. The mesoscale thermal conductivity is calculated in parallel, and the result is passed back to the engineering-scale calculation. As this algorithm is fully contained within the JFNK function evaluation, the mesoscale calculation is nonlinearly consistent with the engineering-scale calculation. Further, the action of the Jacobian is also consistent, so the composite algorithm provides the strong nonlinear convergence properties of Newton's method. The coupled model using INL's BISON code demonstrates quadratic nonlinear convergence and good parallel scalability. Initial results predict the formation of large pores in the hotter center of the pellet, but few pores on the outer circumference. Thus, the thermal conductivity is reduced in the center of the pellet, leading to a higher internal temperature than that in an unirradiated pellet.
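The JFNK machinery above rests on one trick: the Jacobian-vector product is approximated by a finite difference of the residual, J(u)v ≈ (F(u+εv) − F(u))/ε, so the Jacobian is never formed and any computation (here, the embedded mesoscale solve) can live inside the residual evaluation. The sketch below demonstrates that directional-derivative trick on a small invented 2×2 nonlinear system; a production JFNK code would hand `jv` to a preconditioned Krylov solver such as GMRES rather than assemble Jacobian columns as done here for clarity.

```python
def F(u):
    # small invented nonlinear residual with a root at (1, 1); stands in
    # for the coupled engineering-scale/mesoscale residual of the paper
    x, y = u
    return [x*x + y - 2.0, x + y*y - 2.0]

def jv(F, u, v, eps=1e-7):
    # Jacobian-free directional derivative: J(u)*v ~ (F(u+eps*v) - F(u))/eps
    Fu = F(u)
    Fp = F([ui + eps*vi for ui, vi in zip(u, v)])
    return [(a - b)/eps for a, b in zip(Fp, Fu)]

u = [2.0, 2.0]
for _ in range(20):
    r = F(u)
    if max(abs(c) for c in r) < 1e-10:
        break
    # assemble the 2x2 Jacobian from two jv products (illustration only;
    # real JFNK feeds jv directly to the Krylov iteration)
    c1 = jv(F, u, [1.0, 0.0])
    c2 = jv(F, u, [0.0, 1.0])
    det = c1[0]*c2[1] - c2[0]*c1[1]
    dx = (-r[0]*c2[1] + r[1]*c2[0]) / det   # Cramer's rule for J*d = -r
    dy = (-c1[0]*r[1] + c1[1]*r[0]) / det
    u = [u[0] + dx, u[1] + dy]
```

From the starting guess (2, 2) the iteration converges to the root (1, 1) in a few steps, illustrating the Newton convergence the paper relies on.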
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.
2016-01-28
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
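One simple way to build a "statistically equivalent, randomized surrogate" is to decompose a record with a discrete wavelet transform and shuffle the detail coefficients scale by scale, so per-scale statistics survive while the temporal ordering is randomized. The Haar transform, toy record, and shuffling rule below are illustrative assumptions, not the specific scheme of the paper.

```python
import random

def haar_forward(x):
    # full Haar decomposition; len(x) must be a power of two
    levels = []
    a = list(x)
    while len(a) > 1:
        approx = [(a[2*i] + a[2*i+1]) / 2.0 for i in range(len(a)//2)]
        detail = [(a[2*i] - a[2*i+1]) / 2.0 for i in range(len(a)//2)]
        levels.append(detail)  # finest scale first
        a = approx
    return a[0], levels

def haar_inverse(mean, levels):
    a = [mean]
    for detail in reversed(levels):     # coarsest scale first
        nxt = []
        for ai, di in zip(a, detail):
            nxt.extend([ai + di, ai - di])
        a = nxt
    return a

def surrogate(x, rng):
    # shuffle detail coefficients within each scale: per-scale statistics
    # (and the overall mean) are preserved, temporal ordering is randomized
    mean, levels = haar_forward(x)
    for d in levels:
        rng.shuffle(d)
    return haar_inverse(mean, levels)

rng = random.Random(7)
x = [float(i % 4) for i in range(16)]   # toy "surface reaction rate" record
y = surrogate(x, rng)
```

Because each reconstruction step adds and subtracts every detail coefficient exactly once, the surrogate preserves the record's mean and per-scale energy by construction.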
Michael V. Glazoff
2014-10-01
In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry is going to continue using advanced zirconium cladding materials in the foreseeable future, it becomes critical to gain fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting the oxidation and hydrogen pick-up by these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings (if any) could be used in order to gain extremely valuable time at off-normal conditions, e.g., when temperature exceeds the critical value of 2200°F? Thirdly, the kinetics of oxidation of such protective coating or braiding needs to be quantified. Lastly, even if some degree of success is achieved along this path, it is absolutely critical to have automated inspection algorithms that allow identifying defects in cladding as soon as possible. This work strives to explore these interconnected factors from the most advanced computational perspective, utilizing such modern techniques as first-principles atomistic simulations, computational thermodynamics of materials, diffusion modeling, and the morphological algorithms of image processing for defect identification. Consequently, it consists of four parts dealing with these four problem areas, preceded by an introduction and formulation of the studied problems. In the 1st part an effort was made to employ computational thermodynamics and ab initio calculations to shed light upon the different stages of oxidation of Zircaloys (2 and 4), the role of microstructure optimization in increasing their thermal stability, and the process of hydrogen pick-up, both in normal working conditions and in long-term storage. The 2nd part deals with the need to understand the influence and respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning
Smith, Jovanca J.; Bishop, Joseph E.
2013-11-01
This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in 2012 with the aid of mentor Joe Bishop. The project was a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.
V. Chipman; J. Case
2002-12-20
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of
Resolution enhancement of lung 4D-CT data using multiscale interphase iterative nonlocal means
Zhang Yu; Yap, Pew-Thian; Wu Guorong; Feng Qianjin; Chen Wufan; Lian Jun; Shen Dinggang
2013-05-15
Purpose: Four-dimensional computed tomography (4D-CT) has been widely used in lung cancer radiotherapy due to its capability of providing important tumor motion information. However, the prolonged scanning duration required by 4D-CT causes a considerable increase in radiation dose. To minimize the radiation-related health risk, radiation dose is often reduced at the expense of interslice spatial resolution. However, inadequate resolution in 4D-CT causes artifacts and increases uncertainty in tumor localization, which eventually results in additional damage to healthy tissue during radiotherapy. In this paper, the authors propose a novel postprocessing algorithm to enhance the resolution of lung 4D-CT data. Methods: The authors' premise is that anatomical information missing in one phase can be recovered from the complementary information embedded in other phases. The authors employ a patch-based mechanism to propagate information across phases for the reconstruction of intermediate slices in the longitudinal direction, where resolution is normally the lowest. Specifically, structurally matching and spatially nearby patches are combined for the reconstruction of each patch. For greater sensitivity to anatomical details, the authors employ a quad-tree technique to adaptively partition the image for more fine-grained refinement. The authors further devise an iterative strategy for significant enhancement of anatomical details. Results: The authors evaluated their algorithm using a publicly available lung dataset consisting of 10 4D-CT cases. The authors' algorithm gives very promising results, with significantly enhanced image structures and far fewer artifacts. Quantitative analysis shows that the authors' algorithm increases peak signal-to-noise ratio by 3-4 dB and the structural similarity index by 3%-5% when compared with standard interpolation-based algorithms. Conclusions: The authors have developed a new algorithm to improve the resolution of 4D-CT. It outperforms
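The patch-based propagation step described in the abstract can be illustrated with a minimal nonlocal-means-style weighted average: a missing patch is reconstructed as a similarity-weighted combination of candidate patches from other phases. This is a sketch only, not the authors' implementation; the Gaussian weighting and the smoothing parameter `h` are assumptions.

```python
import numpy as np

def reconstruct_patch(candidates, reference, h=0.1):
    """Nonlocal-means-style reconstruction: weight each candidate patch
    by its similarity to a reference patch, then average.
    candidates: array of shape (k, ...) holding k patches.
    reference: a single patch with the same trailing shape."""
    candidates = np.asarray(candidates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # mean squared distance of each candidate to the reference patch
    d2 = ((candidates - reference) ** 2).mean(axis=tuple(range(1, candidates.ndim)))
    w = np.exp(-d2 / (h * h))          # similar patches get large weights
    w /= w.sum()
    return np.tensordot(w, candidates, axes=1)

# toy demo: one candidate matches the reference exactly, one is far away
reference = np.arange(9.0).reshape(3, 3)
candidates = np.stack([reference, reference + 5.0])
restored = reconstruct_patch(candidates, reference, h=0.1)
```

With a small `h`, the dissimilar candidate receives negligible weight, so the restored patch stays close to the structurally matching one.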
Final Report: Geoelectrical Measurement of Multi-Scale Mass Transfer Parameters
Haggerty, Roy; Day-Lewis, Fred; Singha, Kamini; Johnson, Timothy; Binley, Andrew; Lane, John
2014-03-20
Mass transfer affects contaminant transport and is thought to control the efficiency of aquifer remediation at a number of sites within the Department of Energy (DOE) complex. An improved understanding of mass transfer is critical to meeting the enormous scientific and engineering challenges currently facing DOE. Informed design of site remedies and long-term stewardship of radionuclide-contaminated sites will require new cost-effective laboratory and field techniques to measure the parameters controlling mass transfer spatially and across a range of scales. In this project, we sought to capitalize on the geophysical signatures of mass transfer. Previous numerical modeling and pilot-scale field experiments suggested that mass transfer produces a geoelectrical signature—a hysteretic relation between sampled (mobile-domain) fluid conductivity and bulk (mobile + immobile) conductivity—over a range of scales relevant to aquifer remediation. In this work, we investigated the geoelectrical signature of mass transfer during tracer transport in a series of controlled experiments to determine the operation of controlling parameters, and also investigated the use of complex-resistivity (CR) as a means of quantifying mass transfer parameters in situ without tracer experiments. In an add-on component to our grant, we additionally considered nuclear magnetic resonance (NMR) to help parse mobile from immobile porosities. Including the NMR component, our revised study objectives were to: 1. Develop and demonstrate geophysical approaches to measure mass-transfer parameters spatially and over a range of scales, including the combination of electrical resistivity monitoring, tracer tests, complex resistivity, nuclear magnetic resonance, and materials characterization; and 2. Provide mass-transfer estimates for improved understanding of contaminant fate and transport at DOE sites, such as uranium transport at the Hanford 300 Area. To achieve our objectives, we implemented a 3
Li, Dongsheng; Zbib, Hussein M.; Garmestani, Hamid; Sun, Xin; Khaleel, Mohammad A.
2011-07-01
Stainless steels based on Fe-Cr-Ni alloys are the most popular structural materials used in reactors. High-energy particle irradiation of this kind of polycrystalline structural material usually produces irradiation hardening and embrittlement. The development of predictive capability for the influence of irradiation on mechanical behavior is very important in materials design for next-generation reactors. Irradiation hardening is related to structural information across different length scales, such as composition, dislocation, crystal orientation distribution and so on. To predict the effective hardening, the influencing factors along different length scales should be considered. A multiscale approach was implemented in this work to predict irradiation hardening of iron-based structural materials. Three length scales are involved in this multiscale model: nanometer, micrometer and millimeter. At the microscale, molecular dynamics (MD) was utilized to predict the edge dislocation mobility in body-centered cubic (bcc) Fe and its Ni and Cr alloys. On the mesoscale, dislocation dynamics (DD) models were used to predict the critical resolved shear stress from the evolution of local dislocations and defects. On the macroscale, a viscoplastic self-consistent (VPSC) model was applied to predict the irradiation hardening in samples with changes in texture. The effects of defect density and texture were investigated. The simulated evolution of yield strength with irradiation agrees well with the experimental data on irradiation strengthening of stainless steels 304L, 316L and T91. The multiscale model developed in this project can provide a guidance tool in the performance evaluation of structural materials for next-generation nuclear reactors. Combined with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have more impact in improving the reliability of current reactors and affordability of new
Energy Science and Technology Software Center (OSTI)
2014-06-25
PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
Viral kinetic modeling: state of the art
Canini, Laetitia; Perelson, Alan S.
2014-06-25
Viral kinetic modeling has led to increased understanding of the within-host dynamics of viral infections and the effects of therapy. Here we review recent developments in the modeling of viral infection kinetics with emphasis on two infectious diseases: hepatitis C and influenza. We review how viral kinetic modeling has evolved from simple models of viral infections treated with a drug or drug cocktail with an assumed constant effectiveness to models that incorporate drug pharmacokinetics and pharmacodynamics, as well as phenomenological models that simply assume drugs have time-varying effectiveness. We also discuss multiscale models that include intracellular events in viral replication, models of drug resistance, models that include innate and adaptive immune responses, and models that incorporate cell-to-cell spread of infection. Overall, viral kinetic modeling has provided new insights into the understanding of disease progression and the modes of action of several drugs. In conclusion, we expect that viral kinetic modeling will be increasingly used in the coming years to optimize drug regimens in order to improve therapeutic outcomes and treatment tolerability for infectious diseases.
Nichols, John W.; Schultz, Irv R.; Fitzsimmons, Patrick N.
2006-06-10
Mammalian researchers have developed a stepwise approach to predict in vivo hepatic clearance from measurements of in vitro hepatic metabolism. The resulting clearance estimates have been used to screen drug candidates, identify potential drug-drug interactions, investigate idiosyncratic drug responses, and support toxicology risk assessments. In this report we review these methods, discuss their potential application to studies with fish, and describe how extrapolated values could be incorporated into well-known compartmental kinetic models. Empirical equations that relate extrapolation factors to chemical log Kow are given to facilitate the incorporation of metabolism data into bioconcentration and bioaccumulation models. Because they explicitly incorporate the concept of clearance, compartmental clearance volume models are particularly well suited for incorporating hepatic clearance estimates. The manner in which these clearance values are incorporated into a given model depends, however, on the measurement frame of reference. Procedures for the incorporation of in vitro metabolism data into physiologically based toxicokinetic (PBTK) models are also described. Unlike most compartmental models, PBTK models are developed to describe the effects of metabolism in the tissue where it occurs. In addition, PBTK models are well suited to modeling metabolism in more than one tissue.
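The in vitro-to-in vivo step described above is commonly done with the well-stirred liver model, which caps hepatic clearance at liver blood flow. The sketch below illustrates that standard formula; the numerical values are invented for illustration, and the authors' compartmental incorporation is more detailed than this.

```python
def hepatic_clearance(cl_int, q_h, fu):
    """Well-stirred liver model: converts intrinsic clearance (cl_int,
    scaled up from in vitro metabolism to whole-liver units) into in vivo
    hepatic clearance. q_h is liver blood flow and fu the unbound fraction
    in blood; all quantities share the same volume/time units.
    Clearance is metabolism-limited when fu*cl_int << q_h and
    flow-limited (approaching q_h) when fu*cl_int >> q_h."""
    return q_h * fu * cl_int / (q_h + fu * cl_int)

# illustrative values, not measured data
cl_low = hepatic_clearance(cl_int=5.0, q_h=20.0, fu=0.5)      # metabolism-limited
cl_high = hepatic_clearance(cl_int=5000.0, q_h=20.0, fu=0.5)  # flow-limited
```

Because the formula explicitly uses clearance, it slots naturally into the compartmental clearance-volume models the abstract describes.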
Please join us for a triple-header seminar organized around Modeling RNA
U.S. Department of Energy (DOE) all webpages (Extended Search)
and Protein/RNA Complexes | Stanford Synchrotron Radiation Lightsource. Tuesday, November 13, 2012 - 11:15am, SSRL, Bldg. 137-322. Speakers: Julie Bernauer, Debanu Das & Dimitar Pachov. Program Description: 11:15-11:45 Julie Bernauer (INRIA AMIB Bioinfo), Multi-scale modeling for RNA structures: a challenge; 11:45-12:00 Debanu Das (SSRL JSCG), Progress on HT-SB of Protein/Nucleic Acid complexes at
Hoak, T.E. |; Sundberg, K.R.; Ortoleva, P.
1998-12-31
The analysis carried out in the Chemical Interaction of Rocks and Fluids Basin (CIRFB) model describes the chemical and physical evolution of the entire system. One aspect of this is the deformation of the rocks, and its treatment with a rigorous flow and rheological model. This type of analysis depends on knowing the state of the model domain's boundaries as functions of time. In the Andrews and Ector County areas of the Central Basin Platform of West Texas, the authors calculate this shortening with a simple interpretation of the basic motion and a restoration of the Ellenburger formation. Despite its simplicity, this calculation reveals two distinct periods of shortening/extension, a relatively uniform directionality to all the deformation, and the localization of deformation effects to the immediate vicinities of the major faults in the area. Conclusions are drawn regarding the appropriate expressions of these boundary conditions in the CIRFB model and possible implications for exploration.
Palmer, D.L.
1986-09-01
This thesis predicted the airborne spatial distribution of a high-explosive-generated dust cloud. A comparison of predicted cloud center positions to experimental data collected from an aircraft flying through the dust cloud center at various times and altitudes was also studied. The analysis was accomplished using a model called AFGL which produces global complex spectral coefficients. The spectral coefficients were applied as input to a fallout prediction model (called REDRAM) to predict dust mass per cubic meter.
Nelson, Alan J.; Cooper, Gary Wayne; Ruiz, Carlos L.; Chandler, Gordon Andrew; Fehl, David Lee; Hahn, Kelly Denise; Leeper, Ramon Joe; Smelser, Ruth Marie; Torres, Jose A.
2013-09-01
There are several machines in this country that produce short bursts of neutrons for various applications. A few examples are the Z machine, operated by Sandia National Laboratories in Albuquerque, NM; the OMEGA Laser Facility at the University of Rochester in Rochester, NY; and the National Ignition Facility (NIF) operated by the Department of Energy at Lawrence Livermore National Laboratory in Livermore, California. They all incorporate neutron time-of-flight (nTOF) detectors which measure neutron yield, and the shapes of the waveforms from these detectors contain germane information about the plasma conditions that produce the neutrons. However, the signals can also be "clouded" by a certain fraction of neutrons that scatter off structural components and also arrive at the detectors, thereby making analysis of the plasma conditions more difficult. These detectors operate in current mode - i.e., they have no discrimination, and all the photomultiplier anode charges are integrated rather than counted individually as they are in single-event counting. Up to now, there has not been a method for modeling an nTOF detector operating in current mode. MCNP-PoliMi was developed in 2002 to simulate neutron and gamma-ray detection in a plastic scintillator, which produces a collision data output table about each neutron and photon interaction occurring within the scintillator; however, the postprocessing code which accompanies MCNP-PoliMi assumes a detector operating in single-event counting mode and not current mode. Therefore, the idea for this work had been born: could a new postprocessing code be written to simulate an nTOF detector operating in current mode? And if so, could this process be used to address such issues as the impact of neutron scattering on the primary signal? Also, could it possibly even identify sources of scattering (i.e., structural materials) that
Pore-Scale and Multiscale Numerical Simulation of Flow and Transport in a Laboratory-Scale Column
Scheibe, Timothy D.; Perkins, William A.; Richmond, Marshall C.; McKinley, Matthey I.; Romero Gomez, Pedro DJ; Oostrom, Martinus; Wietsma, Thomas W.; Serkowski, John A.; Zachara, John M.
2015-02-01
Pore-scale models are useful for studying relationships between fundamental processes and phenomena at larger (i.e., Darcy) scales. However, the size of domains that can be simulated with explicit pore-scale resolution is limited by computational and observational constraints. Direct numerical simulation of pore-scale flow and transport is typically performed on millimeter-scale volumes at which X-ray computed tomography (XCT), often used to characterize pore geometry, can achieve micrometer resolution. In contrast, the scale at which a continuum approximation of a porous medium is valid is usually larger, on the order of centimeters to decimeters. Furthermore, laboratory experiments that measure continuum properties are typically performed on decimeter-scale columns. At this scale, XCT resolution is coarse (tens to hundreds of micrometers) and prohibits characterization of small pores and grains. We performed simulations of pore-scale processes over a decimeter-scale volume of natural porous media with a wide range of grain sizes, and compared to results of column experiments using the same sample. Simulations were conducted using high-performance codes executed on a supercomputer. Two approaches to XCT image segmentation were evaluated, a binary (pores and solids) segmentation and a ternary segmentation that resolved a third category (porous solids with pores smaller than the imaged resolution). We used a mixed Stokes-Darcy simulation method to simulate the combination of Stokes flow in large open pores and Darcy-like flow in porous solid regions. Simulations based on the ternary segmentation provided results that were consistent with experimental observations, demonstrating our ability to successfully model pore-scale flow over a column-scale domain.
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-15
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
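The FFT-based interpolation mentioned in the abstract can be sketched in one dimension: zero-padding the coarse-grid spectrum yields band-limited values on the fine grid, and the high frequencies left at zero act as exactly the low-pass filter that stabilizes the coupling. This is a minimal illustration of the general technique, not the published scheme.

```python
import numpy as np

def fft_interpolate(signal, factor):
    """Band-limited upsampling of a periodic signal by zero-padding its
    real FFT spectrum. High frequencies absent from the coarse grid stay
    at zero, which filters spatial frequencies the coarse mesh cannot
    represent. `factor` is the integer refinement ratio."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[: len(spec)] = spec
    # rescale because irfft normalizes by the (longer) output length
    return np.fft.irfft(padded, n=n * factor) * factor

coarse = np.sin(2 * np.pi * 3 * np.arange(32) / 32)   # band-limited test signal
fine = fft_interpolate(coarse, 4)
```

For a signal already representable on the coarse grid, the fine-grid samples are recovered to machine precision, which is what makes the coupling between meshes non-dissipative for resolved wavelengths.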
Catalysis Center for Energy Innovation KEY ACCOMPLISHMENTS AND CORE CAPABILITIES
U.S. Department of Energy (DOE) all webpages (Extended Search)
KEY ACCOMPLISHMENTS AND CORE CAPABILITIES. Table of contents: Introduction and Overview of Discoveries and Breakthroughs; Core Capabilities: Multiscale Modeling; Solution-phase Chemistry with Accelerated Molecular Dynamics Methods.
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus the time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
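The Gelman-Rubin diagnostic itself is simple to compute from parallel chains; values near 1 indicate that between-chain and within-chain variances agree and sampling can stop. A minimal sketch of the potential scale reduction factor (R-hat) follows; this is the textbook formula, not the INVERSE implementation.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n.
    chains: array of shape (m, n). R-hat near 1 indicates convergence;
    a common stopping rule is R-hat < 1.1 for every parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 2000))            # four chains sampling one target
stuck = mixed + 5.0 * np.arange(4)[:, None]   # four chains stuck at different modes
```

Chains drawn from the same distribution give R-hat very close to 1, while chains trapped in different modes inflate the between-chain variance and push R-hat well above the stopping threshold.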
Weyand, John D.
1986-01-01
A method of making a region exhibiting a range of compositions, comprising plasma spraying various compositions on top of one another onto a base.
Weyand, J.D.
1986-11-18
A method is disclosed of making a region exhibiting a range of compositions, comprising plasma spraying various compositions on top of one another onto a base. 2 figs.
Callaway, J.M.
1982-08-01
Alternative methods for quantifying the economic impacts associated with future increases in the ambient concentration of CO2 were examined. A literature search was undertaken, both to gain a better understanding of the ways in which CO2 buildup could affect crop growth and to identify the different methods available for assessing the impacts of CO2-induced environmental changes on crop yields. The second task involved identifying the scope of both the direct and indirect economic impacts that could occur as a result of CO2-induced changes in crop yields. The third task then consisted of a comprehensive literature search to identify what types of economic models could be used effectively to assess the kinds of direct and indirect economic impacts that could conceivably occur as a result of CO2 buildup. Specific attention was focused upon national and multi-regional agricultural sector models, multi-country agricultural trade models, and macroeconomic models of the US economy. The fourth and final task of this research involved synthesizing the information gathered in the previous tasks into a systematic framework for assessing the direct and indirect economic impacts of CO2-induced environmental changes related to agricultural production.
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; Absar, Syeda Mariya; Nair, Sujithkumar Surendran; Parish, Esther S.
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration of how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.
CHARACTERIZING COMPLEXITY IN SOLAR MAGNETOGRAM DATA USING A WAVELET-BASED SEGMENTATION METHOD
Kestener, P.; Khalil, A.; Arneodo, A.
2010-07-10
The multifractal nature of solar photospheric magnetic structures is studied using the two-dimensional wavelet transform modulus maxima (WTMM) method. This relies on computing partition functions from the wavelet transform skeleton defined by the WTMM method. This skeleton provides an adaptive space-scale partition of the fractal distribution under study, from which one can extract the multifractal singularity spectrum. We describe the implementation of a multiscale image processing segmentation procedure based on the partitioning of the WT skeleton, which allows the disentangling of the information concerning the multifractal properties of active regions from the surrounding quiet-Sun field. The quiet Sun exhibits an average Hölder exponent ≈ −0.75, with observed multifractal properties due to the supergranular structure. On the other hand, active region multifractal spectra exhibit an average Hölder exponent ≈ 0.38, similar to those found when studying experimental data from turbulent flows.
Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.
2015-05-14
Both the device composition and fabrication process are well-known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit to improve their durability and performance considerably. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of the material characteristics, like, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, like, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposition time of the polymer bulk heterojunction to the action of an external electric field can lead to a low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposition time, the nanophases align in direction to the electric field lines, resulting in an increase of the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, we conclude by modifying the interaction strengths between the electrode surfaces and active layer components that a too low or too high affinity of an electrode surface to one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes
Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E.; Sweeney, C.; Turner, A. J.
2015-11-18
Understanding methane emissions from the Arctic, a fast-warming carbon reservoir, is important for projecting changes in the global methane cycle under future climate scenarios. Here we optimize Arctic methane emissions with a nested-grid high-resolution inverse model by assimilating both high-precision surface measurements and column-average SCIAMACHY satellite retrievals of methane mole fraction. For the first time, methane emissions from lakes are integrated into an atmospheric transport and inversion estimate, together with prior wetland emissions estimated by six different biogeochemical models. We find that the global methane emissions during July 2004–June 2005 ranged from 496.4 to 511.5 Tg yr⁻¹, with wetland methane emissions ranging from 130.0 to 203.3 Tg yr⁻¹. The Arctic methane emissions during July 2004–June 2005 were in the range of 14.6–30.4 Tg yr⁻¹, with wetland and lake emissions ranging from 8.8 to 20.4 Tg yr⁻¹ and from 5.4 to 7.9 Tg yr⁻¹, respectively. Canadian and Siberian lakes contributed most of the estimated lake emissions. Due to insufficient measurements in the region, Arctic methane emissions are less constrained in northern Russia than in Alaska, northern Canada and Scandinavia. Comparison of different inversions indicates that the distribution of global and Arctic methane emissions is sensitive to prior wetland emissions. Evaluation with independent datasets shows that the global and Arctic inversions improve estimates of methane mixing ratios in the boundary layer and free troposphere. The high-resolution inversions provide more details about the spatial distribution of methane emissions in the Arctic.
Lin, YuPo J.; Hestekin, Jamie; Arora, Michelle; St. Martin, Edward J.
2004-09-28
An electrodeionization method for continuously producing and/or separating and/or concentrating ionizable organics present in dilute concentrations in an ionic solution while controlling the pH to within one to one-half pH unit.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure-dependent yield is obtained through the pressure-dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature- and strain-rate-dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure-dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
An Approach to Enhance pnetCDF Performance in Environmental Modeling Applications
Wong, David; Yang, Cheng-En; Fu, Joshua S.; Wong, Kwai; Gao, Yang
2015-01-01
I/O has been considered a bottleneck in parallel applications. The software package pnetCDF, which works with parallel file systems, was developed to address this issue and provide parallel I/O capability. This study examines the performance of a novel approach which performs data aggregation along either the row or column dimension of the spatial domain, and then applies the pnetCDF parallel I/O paradigm. The test was done with three different domain sizes which represent small, moderately large and large data domains, using a small-scale Community Multi-scale Air Quality model (CMAQ) mocked-up code. The examination includes comparing I/O performance among the traditional serial I/O technique, straight application of pnetCDF, and data aggregation along the row or column dimension before applying pnetCDF. After the comparison, optimal I/O configurations for this novel approach were quantified. Data aggregation along the row dimension (pnetCDFcr) works better than along the column dimension (pnetCDFcc), although it may perform slightly worse than the straight pnetCDF method with a small number of processors. When the number of processors becomes larger, pnetCDFcr outperforms pnetCDF significantly. If the number of processors keeps increasing, pnetCDF reaches a point where its performance is even worse than the serial I/O technique. This new approach has also been tested in a real application, where it performs two times better than the straight pnetCDF paradigm.
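The row-aggregation idea can be sketched without MPI: in a block-decomposed 2D domain, each process row gathers its blocks onto one aggregator before writing, so the file system sees a few large contiguous slabs instead of many small strided writes. The process-grid and block sizes below are hypothetical, and real aggregation would use MPI gathers feeding pnetCDF collective writes.

```python
import numpy as np

# Hypothetical 4x3 process grid over a 24x24 domain (illustrative sizes only).
px, py = 4, 3
domain = np.arange(24 * 24, dtype=float).reshape(24, 24)

# block owned by process (i, j): a 6x8 tile of the domain
blocks = [[domain[i * 6:(i + 1) * 6, j * 8:(j + 1) * 8] for j in range(py)]
          for i in range(px)]

# aggregation along the row dimension: one writer per process row gathers
# its py tiles into a single contiguous 6x24 slab (fewer, larger writes)
row_slabs = [np.hstack(row) for row in blocks]
reassembled = np.vstack(row_slabs)   # what the file would contain
```

Stacking the slabs reproduces the full domain, confirming that aggregation only changes the write pattern, not the data; the paper's measurements show this trade of extra gather traffic for fewer writers pays off at large processor counts.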
Wang, Hailong; Rasch, Philip J.; Easter, Richard C.; Singh, Balwinder; Zhang, Rudong; Ma, Po-Lun; Qian, Yun; Ghan, Steven J.; Beagley, Nathaniel
2014-11-27
We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the destiny of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent. Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions. The
Office of Energy Efficiency and Renewable Energy (EERE)
This document is a pre-publication Federal Register supplemental notice of proposed rulemaking regarding energy conservation standards for alternative efficiency determination methods, basic model definition, and compliance for commercial HVAC, Refrigeration, and Water Heating Equipment, as issued by the Deputy Assistant Secretary for Energy Efficiency on September 18, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.
Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Based on the project's scope, the purpose of the estimate, and the availability of estimating resources, the estimator can choose one or a combination of techniques when estimating an activity or project. Estimating methods, estimating indirect and direct costs, and other estimating considerations are discussed in this chapter.
Glass, J.T.
1993-01-01
Methods discussed in this compilation of notes and diagrams are Raman spectroscopy, scanning electron microscopy, transmission electron microscopy, and other surface analysis techniques (auger electron spectroscopy, x-ray photoelectron spectroscopy, electron energy loss spectroscopy, and scanning tunnelling microscopy). A comparative evaluation of different techniques is performed. In-vacuo and in-situ analyses are described.
Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; Schmitt, Jean -Hubert
2015-11-16
With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure–property relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. In conclusion this manuscript ends with a discussion on some problems and open scientific questions that are being explored in order to advance this relatively new field of research.
Jack Parker
2007-04-19
The task objectives are: (1) Gain an improved understanding of hydrologic, geochemical and biological processes and their interactions at relevant time and space scales; and (2) Develop practical, site-independent tools for evaluating effects of natural and engineered processes on long-term performance.
Presentation given by Mississippi State University at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about a systematic...
Townsend, R.G.
1959-08-25
A method is described for protectively coating beryllium metal by etching the metal in an acid bath, briefly immersing the etched beryllium in a solution of sodium zincate, immersing the beryllium in concentrated nitric acid, immersing the beryllium in a second solution of sodium zincate, electroplating a thin layer of copper over the beryllium, and finally electroplating a layer of chromium over the copper layer.
Novel method for carbon nanofilament growth on carbon fibers
Phillips, Jonathan; Luhrs, Claudia; Tehrani, Mehran; Al-Haik, Marwan; Garcia, Daniel; Taha, Mahmoud R
2009-01-01
smooth walls and low impurity content were grown. Carbon nanofibers were also grown on a carbon fiber cloth using plasma-enhanced chemical vapor deposition (CVD) from a mixture of acetylene and ammonia. In this case, a cobalt colloid was used to achieve good coverage of nanofibers on the carbon fibers in the cloth. Caveats to CNT growth include damage to the carbon fiber surface due to high temperatures (>800 °C). More recently, Qu et al. reported a new method for uniform deposition of CNTs on carbon fibers. However, this method requires processing at 1100 °C in the presence of oxygen, and such a high temperature is anticipated to deepen the damage in the carbon fibers. In the present work, multi-scale filaments (herein, linear carbon structures with multi-micron diameters are called 'fibers', and all structures with sub-micron diameters are called 'filaments') were created with a low-temperature (ca. 550 °C) alternative to CVD growth of CNTs. Specifically, nano-scale filaments were rapidly generated (>10 microns/hour) on commercial micron-scale fibers via catalytic (Pd particle) growth in a fuel-rich combustion environment at atmospheric pressure. This atmospheric-pressure process, derived from the process called Graphitic Structures by Design (GSD), is rapid, keeps the maximum temperature low enough (below 700 °C) to avoid structural damage, and is inexpensive and readily scalable. In some cases, a significant and unexpected aspect of the process was the generation of 'three-scale' materials; that is, materials with these three size characteristics were produced: (1) micrometer-scale commercial PAN fibers, (2) a layer of 'long' sub-micrometer-diameter carbon filaments, and (3) a dense layer of 'short' nanometer-diameter filaments.
Hierarchical Models for Batteries: Overview with Some Case Studies
Pannala, Sreekanth; Mukherjee, Partha P; Allu, Srikanth; Nanda, Jagjit; Martha, Surendra K; Dudney, Nancy J; Turner, John A
2012-01-01
Batteries are complex multiscale systems, and a hierarchy of models has been employed to study different aspects of batteries at different resolutions. For electrochemistry and charge transport, the models span electric circuits, single-particle, pseudo-2D, detailed 3D, and microstructure-resolved descriptions at the continuum scales, with techniques such as molecular dynamics and density functional theory resolving the atomistic structure. Similar analogies exist for the thermal, mechanical, and electrical aspects of batteries. We have recently been working on the development of a unified formulation for the continuum scales across the electrode-electrolyte-electrode system, using a rigorous volume-averaging approach typical of multiphase formulations. This formulation accounts for spatio-temporal variation of properties such as electrode/void volume fractions and anisotropic conductivities. In this talk the following will be presented: (1) the background and the hierarchy of models that need to be integrated into a battery modeling framework to carry out predictive simulations; (2) our recent work on the unified 3D formulation addressing the missing links in the multiscale description of batteries; (3) our work on microstructure-resolved simulations of diffusion processes; (4) upscaling of quantities of interest to construct closures for the 3D continuum description; (5) sample results for a standard carbon/spinel cell, compared to experimental data; and (6) the infrastructure we are building to bring together components with different physics operating at different resolutions. The presentation will also include details about how this generalized approach can be applied to other electrochemical storage systems such as supercapacitors, Li-air batteries, and lithium batteries with 3D architectures.
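The lowest rung of the model hierarchy mentioned above, the electric-circuit description, can be sketched in a few lines. This is a generic first-order Thevenin (OCV + R0 + RC pair) cell model, not the authors' volume-averaged formulation, and all parameter values are illustrative.

```python
# Hedged sketch: a first-order RC equivalent-circuit cell model, the
# simplest level of the battery model hierarchy. Parameters are made up.

def simulate_rc_cell(current, dt, ocv=3.7, r0=0.05, r1=0.02, c1=2000.0):
    """Terminal voltage of an OCV + R0 + (R1 || C1) circuit under a current profile.

    current: applied current in A at each step (discharge positive)
    dt: time step in seconds
    """
    v_rc = 0.0          # voltage across the RC pair
    voltages = []
    for i in current:
        # Explicit Euler update of the RC branch: dv/dt = (i - v/R1) / C1
        v_rc += dt * (i - v_rc / r1) / c1
        voltages.append(ocv - i * r0 - v_rc)
    return voltages

# 10 A discharge pulse for 60 s at 1 s resolution: instantaneous IR drop,
# then a slower droop as the RC branch charges up
v = simulate_rc_cell([10.0] * 60, dt=1.0)
```

Higher rungs of the hierarchy (single-particle, pseudo-2D, microstructure-resolved) replace the lumped RC elements with physics-based transport equations but serve the same role of predicting terminal behavior from internal state.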
He, W.; Anderson, R.N.
1998-08-25
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.
He, Wei; Anderson, Roger N.
1998-01-01
A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.
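The core recursion behind deriving impedance from reflection data can be sketched simply. This is the classical trace-integration relationship between reflection coefficients and layer impedances, seeded by a well-log value, not the patent's full iterative, source-function-constrained scheme; the layer values are invented for illustration.

```python
# Hedged sketch: recursive acoustic-impedance recovery from reflection
# coefficients, the textbook relation underlying seismic inversion.
# Layer impedances below are made-up illustrative values.

def impedance_from_reflectivity(z0, reflectivity):
    """Recover layer impedances from reflection coefficients.

    Uses z[k+1] = z[k] * (1 + r[k]) / (1 - r[k]), the inverse of
    r[k] = (z[k+1] - z[k]) / (z[k+1] + z[k]).
    z0 is the impedance of the first layer, e.g. tied to a well log.
    """
    z = [z0]
    for r in reflectivity:
        z.append(z[-1] * (1.0 + r) / (1.0 - r))
    return z

def reflectivity_from_impedance(z):
    """Forward model: reflection coefficient at each layer interface."""
    return [(z[k + 1] - z[k]) / (z[k + 1] + z[k]) for k in range(len(z) - 1)]

# Round trip: true impedances -> reflectivity -> recovered impedances
true_z = [2.0e6, 2.4e6, 1.8e6, 3.0e6]   # kg/(m^2 s), illustrative layers
r = reflectivity_from_impedance(true_z)
recovered = impedance_from_reflectivity(true_z[0], r)
```

In practice the recursion is applied to band-limited, noisy reflectivity estimates, which is why the method described above constrains the inversion with actual well-log data and a time/depth-dependent source function.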