Energy Consumption of Personal Computing Including Portable Communication Devices
Namboodiri, Vinod
Energy Consumption of Personal Computing Including Portable Communication Devices. Pavel Somavat ... In light of the increased awareness of global energy consumption, questions are being asked about the energy contribution of computing equipment. Although studies have documented the share of energy consumption by this type of equipment over the years, research...
Ortman, Daniel William
1982-01-01T23:59:59.000Z
... THROUGH A DUCT OR PIN-FIN DUCT. FLOW THROUGH THE LEADING EDGE. CALCULATION OF BLADE SURFACE TEMPERATURE. CAPABILITIES OF THE MODEL. RESULTS AND CONCLUSION. FUTURE WORK: OPTIMIZATION OF COOLANT FLOW PARAMETERS; INCLUSION OF THREE-DIMENSIONAL CONDUCTION... Nomenclature: ... pressure; downstream coolant fluid static pressure; coolant fluid coefficient of friction; hydraulic diameter of coolant duct; length of coolant duct (element); density of fluid coolant (corrected for pressure and temperature); velocity of the coolant...
Models of Procyon A including seismic constraints
P. Eggenberger; F. Carrier; F. Bouchy
2005-01-14T23:59:59.000Z
Detailed models of Procyon A based on new asteroseismic measurements by Eggenberger et al. (2004) have been computed using the Geneva evolution code including shellular rotation and atomic diffusion. By combining all non-asteroseismic observables now available for Procyon A with these seismological data, we find that the observed mean large spacing of 55.5 ± 0.5 µHz favours a mass of 1.497 M_sol for Procyon A. We also determine the following global parameters of Procyon A: an age of t = 1.72 ± 0.30 Gyr, an initial helium mass fraction Y_i = 0.290 ± 0.010, a nearly solar initial metallicity (Z/X)_i = 0.0234 ± 0.0015 and a mixing-length parameter alpha = 1.75 ± 0.40. Moreover, we show that the effects of rotation on the inner structure of the star may be revealed by asteroseismic observations if frequencies can be determined with high precision. Existing seismological data for Procyon A are unfortunately not accurate enough to test these differences in the input physics of our models.
Including Blind Students in Computer Science Through Access to Graphs
Young, R. Michael
Including Blind Students in Computer Science Through Access to Graphs. Suzanne Balik, Sean Mealin ... the Graph SKetching tool, GSK, to provide blind and sighted people with a means to create, examine, and share graphs (node-link diagrams) in real time. GSK proved very effective for one blind computer science student...
Typologies of Computation and Computational Models
Mark Burgin; Gordana Dodig-Crnkovic
2013-12-09T23:59:59.000Z
We need a much better understanding of information processing, and of computation as its primary form. Future progress on new computational devices capable of dealing with problems of big data, the internet of things, the semantic web, cognitive robotics and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through a systematization of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, defining computation as information processing, and given that there is no information without (physical) representation, the dynamics of information at the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs at a variety of levels of physical processes, including the levels of computation of living organisms (including highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and by learning from nature how to compute better using different mechanisms of intrinsic computation.
Seepage Model for PA Including Drift Collapse
G. Li; C. Tsang
2000-12-20T23:59:59.000Z
The purpose of this Analysis/Model Report (AMR) is to document the predictions and analyses performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long-term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and by evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analyses and simulations that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift.
The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to stochastically simulate the 3D flow of water in the fractured host rock (in the vicinity of potential emplacement drifts) under ambient conditions. The Disturbed Drift Seepage Submodel evaluates the impact of the partial collapse of a drift on seepage. Drainage in rock below the emplacement drift is also evaluated.
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11T23:59:59.000Z
The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
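The force-to-rate mapping described in the abstract can be sketched in a few lines. The threshold, gain, and function names below are illustrative only, not taken from the patent:

```c
/* Rate-controlled scrolling against a haptic boundary: force below
 * the boundary threshold leaves the view still; force beyond it
 * scrolls at a rate proportional to the excess force.  Names,
 * units, and constants here are hypothetical. */
double scroll_rate_lines_per_s(double force_n, double threshold_n,
                               double gain) {
    double excess = force_n - threshold_n;
    return (excess > 0.0) ? gain * excess : 0.0;
}
```

For example, with a 1 N boundary threshold and a gain of 10, pressing with 2.5 N scrolls at 15 lines/s, while 0.5 N (still inside the boundary) leaves the view stationary.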
On Continuous Models of Computation: Towards Computing the Distance Between
Schellekens, Michel P.
... with building formal, mathematical models both for aspects of the computational process and for features ... we discuss this issue in Section 3.1. 6th Irish Workshop on Formal Methods (IWFM'03), eWiC, British Computer Society. ... traditionally associated with computer science are logic and discrete mathematics, the latter including set theory...
Campbell, Andrew T.
process

#include <sys/types.h>
#include <unistd.h>

pid_t pid = fork();
if (pid == -1) {
    /* fork() failed */
} else if (pid == 0) {
    /* child process */
} else {
    /* parent process */
}

thread
Poinsot, Laurent
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

// Reminder: "getpid()" returns a process's own pid;
//           "getppid()" returns the pid of a process's parent.
int main(void) {
    pid_t pid_fils;
    pid_fils = fork();
    if (pid_fils == -1) {
        printf("Error creating the child process\n");
    }
    ...
A coke oven model including thermal decomposition kinetics of tar
Munekane, Fuminori; Yamaguchi, Yukio [Mitsubishi Chemical Corp., Yokohama (Japan); Tanioka, Seiichi [Mitsubishi Chemical Corp., Sakaide (Japan)
1997-12-31T23:59:59.000Z
A new one-dimensional coke oven model has been developed for simulating the amount and the characteristics of by-products such as tar and gas, as well as coke. This model couples heat transfer with chemical kinetics, including the thermal decomposition of coal and tar. The chemical kinetics constants are estimated from experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with the experimental ones.
Bayesian hierarchical reconstruction of protein profiles including a digestion model
Paris-Sud XI, Université de
Bayesian hierarchical reconstruction of protein profiles including a digestion model. Pierre ... to recover the protein biomarker content in a robust way. We will focus on the digestion step, since ... each branch corresponds to a molecular processing step such as digestion, ionisation and LC-MS separation...
Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces
Lomov, I; Antoun, T; Vorobiev, O
2009-12-16T23:59:59.000Z
Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength.
In the present work we consider the limiting case of stiff discontinuities that only affect the shear strength of the material.
Parallel computing in enterprise modeling.
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01T23:59:59.000Z
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Improved computer models support genetics research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods...
COLLEGE OF SCIENCE Computational Modeling & Data Analytics
Crawford, T. Daniel
COLLEGE OF SCIENCE: Computational Modeling & Data Analytics. The Bachelor of Science in Computational Modeling and Data Analytics (CMDA) ... mathematics. It imparts the unique blend of skills from Statistics, Mathematics, and Computer Science needed...
Accounting for the Energy Consumption of Personal Computing Including Portable Devices
Namboodiri, Vinod
Accounting for the Energy Consumption of Personal Computing Including Portable Devices. Pavel Somavat ... vinod.namboodiri@wichita.edu. ABSTRACT In light of the increased awareness of global energy consumption ... the share of energy consumption due to this equipment over the years, these have rarely characterized...
LANL computer model boosts engine efficiency
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
LANL computer model boosts engine efficiency. The KIVA model has been instrumental in helping researchers and manufacturers understand...
Modelling energy efficiency for computation
Reams, Charles
2012-11-13T23:59:59.000Z
In the last decade, efficient use of energy has become a topic of global significance, touching almost every area of modern life, including computing. From mobile to desktop to server, energy efficiency concerns are now ubiquitous. However...
Elastic–Plastic Spherical Contact Modeling Including Roughness Effects
Li, L.; Etsion, I.; Talke, F. E.
2010-01-01T23:59:59.000Z
A multilevel model for elastic-plastic contact between ... junction growth of an elastic-plastic spherical contact ... finite element based elastic-plastic model for the contact of ...
activities including modelling: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Nonpoint source models associate pollutant loads almost exclusively with agricultural sources: manure, fertilizers, soil-plant complexes, and impervious surfaces and those associated...
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01T23:59:59.000Z
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Computer Modeling Illuminates Degradation Pathways of Cations in Alkaline Membrane Fuel Cells
Computer Modeling Illuminates Degradation Pathways of Cations in Alkaline Membrane Fuel Cells. Cation degradation insights obtained by computational modeling could result in better performance ... are effective in increasing cation stability. With the help of computational modeling, more cations are being...
Including model uncertainty in risk-informed decision-making
Reinert, Joshua M
2005-01-01T23:59:59.000Z
Model uncertainties can have a significant impact on decisions regarding licensing basis changes. We present a methodology to identify basic events in the risk assessment that have the potential to change the decision and ...
DYNAMIC MODELLING OF AUTONOMOUS POWER SYSTEMS INCLUDING RENEWABLE POWER SOURCES.
Paris-Sud XI, UniversitĂ© de
... (thermal, gas, diesel) and renewable (hydro, wind) power units. The objective is to assess the impact ... that have a special dynamic behaviour, and the wind turbines. Detailed models for each one of the power system components are developed. Emphasis is given to the representation of different hydro power plant...
Theory, Modeling and Computation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Generalized Modeling of Enrichment Cascades That Include Minor Isotopes
Weber, Charles F [ORNL
2012-01-01T23:59:59.000Z
The monitoring of enrichment operations may require innovative analysis to allow for imperfect or missing data. The presence of minor isotopes may help or hurt: they can complicate a calculation or provide additional data to corroborate a calculation. However, they must be considered in a rigorous analysis, especially in cases involving reuse. This study considers matched-abundance-ratio cascades that involve at least three isotopes and allows generalized input that does not require all feed assays or the enrichment factor to be specified. Calculations are based on the equations developed for the MSTAR code but are generalized to allow input of various combinations of assays, flows, and other cascade properties. Traditional cascade models have required specification of the enrichment factor, all feed assays, and the product and waste assays of the primary enriched component. The calculation would then produce the numbers of stages in the enriching and stripping sections and the remaining assays in waste and product streams. In cases where the enrichment factor or feed assays were not known, analysis was difficult or impossible. However, if other quantities are known (e.g., additional assays in waste or product streams), a reliable calculation is still possible with the new code, but such non-standard input may introduce additional numerical difficulties into the calculation. Thus, the minimum input requirements for a stable solution are discussed, and a sample problem with a non-unique solution is described. Both heuristic and mathematically required guidelines are given to assist the application of cascade modeling to situations involving such non-standard input. As a result, this work provides both a calculational tool and specific guidance for evaluation of enrichment cascades in which traditional input data are either flawed or unknown.
It is useful for cases involving minor isotopes, especially if the minor isotope assays are desired (or required) to be important contributors to the overall analysis.
Grace, T. M.; Wag, K. J.; Horton, R. R.; Frederick, W. J.
1994-01-01T23:59:59.000Z
This paper describes an improved model of char burning during black liquor combustion that is capable of predicting net rates of sulfate reduction to sulfide as well as carbon burnup rates. Enhancements include a proper ...
CSC6870 Computer Graphics II Geometric Modeling
Hua, Jing
CSC6870 Computer Graphics II: Geometric Modeling. Overview: 3D shape ... subdivision surfaces, implicit surfaces, particles; solids. Basic Shapes; Fundamental Shapes.
CARD No. 23 Models and Computer Codes
CARD No. 23: Models and Computer Codes. 23.A BACKGROUND. Section 194.23 addresses the compliance criteria requirements for conceptual models and computer codes. Conceptual models capture a general ... (PA). The design of computer codes begins with the development of conceptual models.
Improved computer models support genetics research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. When...
RELAP5-3D Code Includes Athena Features and Models
Richard A. Riemke; Cliff B. Davis; Richard R. Schultz
2006-07-01T23:59:59.000Z
Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.
6. Models of interactive computing
Keil, David M.
6. Models of interactive computing. David Keil, Theory of Computing. 1. Sequential interaction; 2. Basic models of interaction; 3. Persistent TMs and equivalent models; 4. ... Inquiry: What's a paradigm shift? Is interaction more powerful...
Dynamic modeling of three-phase upflow fixed-bed reactor including pore diffusion. C. Julcour
Paris-Sud XI, Université de
Dynamic modeling of three-phase upflow fixed-bed reactor including pore diffusion. C. Julcour ... The dynamics of a three-phase upflow fixed-bed reactor are investigated using a non-isothermal heterogeneous model including gas ... not limiting, so that the simplest model accurately predicts the transient reactor behavior. Keywords: fixed-bed...
aspen computer models: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Verification of computer vision (Subbarao, Murali "Rao"); Natural Computation and Non-Turing Models of Computation; Computer Technologies and Information Sciences Websites...
Modeling of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications
Modeling of Geothermal Reservoirs: Fundamental Processes, Computer Simulation and Field Applications. OpenEI Reference Library. Journal Article: Modeling of...
Cupola Furnace Computer Process Model
Seymour Katz
2004-12-31T23:59:59.000Z
The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).
Deformable Models & Applications Department of Computer Science
Duan, Ye
Deformable Models & Applications (Part I). Ye Duan, Department of Computer Science, University of Missouri at Columbia. December 21, 2004.
Climate Modeling using High-Performance Computing
Mirin, A A
2007-02-05T23:59:59.000Z
The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.
Paris-Sud XI, Université de
Selective hydrogenation in trickle-bed reactor: experimental and modelling including partial ... ENSIACET, 118 route de Narbonne, 31077 Toulouse cedex. Abstract: A steady-state model of a trickle-bed reactor ... Keywords: reactor; selective hydrogenation; trickle-bed modelling. 1. Introduction: Fixed-bed reactors with down...
Statistical Model Computation with UDFs Carlos Ordonez
Ordonez, Carlos
Abstract--Statistical models are generally computed outside a DBMS due to their mathematical complexity. We introduce techniques to efficiently compute fundamental statistical models inside a DBMS ... of primitive scalar UDFs to score data sets. Experiments compare UDFs and SQL queries (running inside the DBMS...
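The one-pass aggregation pattern such in-DBMS techniques typically rely on can be sketched as follows; the struct and function names are illustrative, not the paper's actual UDF API:

```c
#include <stddef.h>

/* Aggregate-UDF pattern: fold each row into a small record of
 * sufficient statistics (count, sum, sum of squares) in a single
 * pass, then derive model parameters from the record.  This is
 * what lets a statistical model be computed inside the DBMS
 * without exporting the data. */
typedef struct { double n, sum, sum_sq; } SuffStats;

void suffstats_accumulate(SuffStats *st, const double *x, size_t len) {
    for (size_t i = 0; i < len; ++i) {
        st->n += 1.0;
        st->sum += x[i];
        st->sum_sq += x[i] * x[i];
    }
}

double suffstats_mean(const SuffStats *st) { return st->sum / st->n; }

double suffstats_variance(const SuffStats *st) {
    double m = suffstats_mean(st);
    return st->sum_sq / st->n - m * m; /* population variance */
}
```

In an actual DBMS the accumulate step would be the aggregate's per-row transition function and the mean/variance calls its final function; the same three-field record also merges trivially across parallel scans.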
A non-isothermal PEM fuel cell model including two water transport mechanisms in the membrane
Münster, Westfälische Wilhelms-Universität
A non-isothermal PEM fuel cell model including two water transport mechanisms in the membrane. ... Freiburg, Germany. A dynamic two-phase flow model for proton exchange membrane (PEM) fuel cells ... and the species concentrations. In order to describe the charge transport in the fuel cell, the Poisson equations...
MAC-Kaust Project P1: CO2 Sequestration. Modeling of CO2 sequestration including parameter identification
Turova, Varvara
MAC-Kaust Project P1: CO2 Sequestration. Modeling of CO2 sequestration including parameter identification and numerical simulation (M. Brokate, O. A. Pykhteev). Hysteresis aspects of CO2 sequestration modeling (K.-H. Hoffmann, N. D. Botkin). Objectives and methods of CO2 sequestration: There is a popular belief...
Quantum Computation Beyond the Circuit Model
Stephen P. Jordan
2008-09-13T23:59:59.000Z
The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, several other models of quantum computation exist which provide useful alternative frameworks for both discovering new quantum algorithms and devising new physical implementations of quantum computers. In this thesis, I first present necessary background material for a general physics audience and discuss existing models of quantum computation. Then, I present three results relating to various models of quantum computation: a scheme for improving the intrinsic fault tolerance of adiabatic quantum computers using quantum error detecting codes, a proof that a certain problem of estimating Jones polynomials is complete for the one clean qubit complexity class, and a generalization of perturbative gadgets which allows k-body interactions to be directly simulated using 2-body interactions. Lastly, I discuss general principles regarding quantum computation that I learned in the course of my research, and using these principles I propose directions for future research.
Computer aided nuclear reactor modeling
Warraich, Khalid Sarwar
1995-01-01T23:59:59.000Z
(... after the model has been sent to CENTAR). We then present an interactive, graphical, icon-based modeling program, Alpha, that lets the user "draw" the model on screen and translates it into a syntactically correct CENTAR input model which is also free...
Modelling morphogenesis as an amorphous computation
Bhattacharyya, Arnab
2006-01-01T23:59:59.000Z
This thesis presents a programming-language viewpoint for morphogenesis, the process of shape formation during embryological development. We model morphogenesis as a self-organizing, self-repairing amorphous computation ...
Modelling Cloud Computing Infrastructure Marianne Hickey and Maher Rahmouni,
Paris-Sud XI, Université de
Modelling Cloud Computing Infrastructure. Marianne Hickey and Maher Rahmouni, HP Labs, Long Down ... and shared vocabularies. Keywords: Modelling, Cloud Computing, RDF, Ontology, Rules, Validation. 1 Introduction: There is currently a shift towards cloud computing, which changes the model of provision...
Grace, T. M.; Wag, K. J.; Horton, R. R.; Frederick, W. J.
gasification, reactions between oxygen and combustibles in the boundary layer, and integration of sulfate reduction and sulfide reoxidation into the char burning process. Simulations using the model show that for typical recovery boiler conditions, char burning...
Computer modeling reveals how surprisingly potent hepatitis C...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A study reveals how the surprisingly potent hepatitis C drug daclatasvir targets one of the virus's proteins and causes the...
HIV virus spread and evolution studied through computer modeling
This approach distinguishes between susceptible and infected...
Computationally Efficient Modeling of High-Efficiency Clean Combustion...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Computer aided nuclear reactor modeling
Warraich, Khalid Sarwar
1995-01-01T23:59:59.000Z
CENTAR model), and by interacting with the user and providing feedback by checking for errors and advising corrections. The architecture of Alpha is presented with its constituent libraries explained in their internal working and external interactions...
A wave equation including leptons and quarks for the standard model of quantum physics in
Boyer, Edmond
A wave equation with mass term is studied for all particles and antiparticles, including quarks u and d with color and their antiquarks. This wave equation is form invariant under the Cl3 group, generalizing...
A macroscopic 1D model for shape memory alloys including asymmetric behaviors and
Stefanelli, Ulisse
Interest in the transformation behavior of shape memory alloys (SMAs) has been widely growing in recent years because of the increasing ... reproduce main macroscopic SMA behaviors (i.e., superelasticity and shape-memory effect), without however ...
Time-dependent model for diluted magnetic semiconductors including band structure and confinement
Boyer, Edmond
... dynamics in confined diluted magnetic semiconductors induced by laser. The hole-spin relaxation process ... light-induced magnetization dynamics in ferromagnetic films and in diluted magnetic semiconductors (DMS)...
Coupling time decoding and trajectory decoding using a target-included model in the motor cortex
Wu, Wei
Communicated by D. Erdogmus; available online 27 December 2011. Keywords: neural decoding, motor cortex, target. ... made within the last decade in motor cortical decoding that predicts movement behaviors from population...
Scot Martin
2013-01-31T23:59:59.000Z
The chemical evolution of secondary-organic-aerosol (SOA) particles and how this evolution alters their cloud-nucleating properties were studied. Simplified forms of full Koehler theory were targeted, specifically forms that contain only those aspects essential to describing the laboratory observations, because of the requirement to minimize computational burden for use in integrated climate and chemistry models. The associated data analysis and interpretation have therefore focused on model development in the framework of modified kappa-Koehler theory. Kappa is a single parameter describing effective hygroscopicity, grouping together several separate physicochemical parameters (e.g., molar volume, surface tension, and van't Hoff factor) that otherwise must be tracked and evaluated in an iterative full-Koehler equation in a large-scale model. A major finding of the project was that secondary organic materials produced by the oxidation of a range of biogenic volatile organic compounds for diverse conditions have kappa values bracketed in the range of 0.10 +/- 0.05. In these same experiments, somewhat incongruently there was significant chemical variation in the secondary organic material, especially oxidation state, as was indicated by changes in the particle mass spectra. Taken together, these findings then support the use of kappa as a simplified yet accurate general parameter to represent the CCN activation of secondary organic material in large-scale atmospheric and climate models, thereby greatly reducing the computational burden while simultaneously including the most recent mechanistic findings of laboratory studies.
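The single-parameter kappa-Koehler framework summarized in this abstract lends itself to a compact numerical illustration. The sketch below is not the project's code; the water constants, temperature, and brute-force maximum search are assumptions chosen for clarity, following the standard kappa-Koehler formulation.

```python
import math

def kohler_saturation_ratio(D, D_dry, kappa, T=298.15):
    """Equilibrium saturation ratio S over an aqueous droplet of wet
    diameter D grown on a dry particle of diameter D_dry, using
    single-parameter kappa-Koehler theory (illustrative constants)."""
    sigma = 0.072      # surface tension of water, J/m^2 (assumed)
    M_w = 0.018015     # molar mass of water, kg/mol
    rho_w = 997.0      # density of water, kg/m^3
    R = 8.314          # gas constant, J/(mol K)
    kelvin = math.exp(4.0 * sigma * M_w / (R * T * rho_w * D))
    raoult = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))
    return raoult * kelvin

def critical_supersaturation(D_dry, kappa, T=298.15):
    """Scan wet diameters for the Koehler-curve maximum; return the
    critical supersaturation in percent."""
    best = 0.0
    D = D_dry * 1.001
    while D < D_dry * 100.0:
        best = max(best, kohler_saturation_ratio(D, D_dry, kappa, T))
        D *= 1.001  # 0.1% geometric steps are plenty for a sketch
    return (best - 1.0) * 100.0
```

For a 100 nm dry particle, kappa = 0.10 gives a critical supersaturation of a few tenths of a percent, and larger kappa values activate at lower supersaturation, which is why a single bracketed kappa suffices for CCN activation in large-scale models.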
A Computational Model for Adaptive Emotion Regulation
Treur, Jan
By Tibor Bosse, Matthijs Pontier, and Jan Treur. Abstract: Emotion regulation describes how a subject can use certain strategies to affect emotion response levels. Usually, models for emotion regulation assume mechanisms based on feedback loops that indicate
ASSISTANT PROFESSOR OF MECHANICAL ENGINEERING COMPUTATIONAL MODELING
ASSISTANT PROFESSOR OF MECHANICAL ENGINEERING, COMPUTATIONAL MODELING, COLLEGE OF ENGINEERING. The Department of Mechanical Engineering at Colorado State University invites applications for a tenure-track position in computational modeling of ... processes, with emphasis on applying the models to engineering systems of interest in the energy or materials
Mechanistic Models in Computational Social Science
Holme, Petter
2015-01-01T23:59:59.000Z
Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of more than 60 years. They have been used for many different purposes: to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences, and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.
Computer Models in Astronomy and Statistics Stellar Evolution
van Dyk, David
Stellar Evolution; Calibration of X-ray Detectors; Embedding Astronomical Computer Models into Complex Statistical Models. David A. van Dyk, Statistics Section: Complex Analyses with Computer Models in Astronomy.
Decision Model for Cloud Computing
Kondo, Derrick
with different pricing models for cost-cutting, resource-hungry users. Second, prices can differ dynamically (as ... Grenoble, France). Trade-offs: supercomputers offer high performance and high reliability at high cost ($). · "Spot" instance price varies dynamically. · A spot instance is provided when the user's bid is greater ...
CDF computing and event data models
Snider, F.D.; /Fermilab
2005-12-01T23:59:59.000Z
The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.
Countable Models, Computability, and Enumerations, Valentina Harizanov
Harizanov, Valentina S.
· A Scott family for A is a set of formulas, with a fixed finite tuple of parameters c in A, such that each ... diagram of A, D(A). A is computable (recursive) if its Turing degree is 0. · D(A) may be of much lower Turing degree than Th(A). N, the standard model of arithmetic, is computable. True Arithmetic, TA = Th...
Computer modeling of the global warming effect
Washington, W.M. [National Center for Atmospheric Research, Boulder, CO (United States)
1993-12-31T23:59:59.000Z
The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.
The Food Crises: A quantitative model of food prices including speculators and ethanol conversion
Lagi, Marco; Bertrand, Karla Z; Bar-Yam, Yaneer
2011-01-01T23:59:59.000Z
Recent increases in basic food prices are severely impacting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the US, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, while an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities and bonds to take ad...
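The qualitative mechanism described in this abstract, speculator trend following superimposed on supply-demand equilibrium, can be caricatured in a few lines. This is a toy sketch, not the paper's fitted model; the linear form and the coefficients are illustrative assumptions only.

```python
def simulate_price(p0, fundamental, k_rev, k_trend, steps):
    """Toy price dynamics in the spirit of speculator models:
    mean reversion toward a supply/demand 'fundamental' value plus
    trend-following momentum. All coefficients are illustrative."""
    prices = [p0, p0]
    for _ in range(steps):
        reversion = k_rev * (fundamental - prices[-1])   # equilibrium pull
        momentum = k_trend * (prices[-1] - prices[-2])   # trend followers
        prices.append(prices[-1] + reversion + momentum)
    return prices
```

With k_trend = 0 the price relaxes monotonically to the fundamental; with strong trend following it overshoots and oscillates around it, the bubble-and-crash signature that the paper attributes to speculation.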
A Model for the Human Computer Interface Evaluation in Safety Critical Computer
Schreiber, Fabio A.
By Fabio A. Schreiber. In: Proceedings of the IEEE International Conference and Workshop: Engineering of Computer-Based Systems, March 1998, Jerusalem, Israel.
Modeling Computations in a Semantic Network
Marko A. Rodriguez; Johan Bollen
2007-05-31T23:59:59.000Z
Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
High performance computing and numerical modelling
2014-01-01T23:59:59.000Z
Numerical methods play an ever more important role in astrophysics. This is especially true in theoretical works, but of course, even in purely observational projects, data analysis without massive use of computational methods has become unthinkable. The key utility of computer simulations comes from their ability to solve complex systems of equations that are either intractable with analytic techniques or only amenable to highly approximative treatments. Simulations are best viewed as a powerful complement to analytic reasoning, and as the method of choice to model systems that feature enormous physical complexity such as star formation in evolving galaxies, the topic of this 43rd Saas Fee Advanced Course. The organizers asked me to lecture about high performance computing and numerical modelling in this winter school, and to specifically cover the basics of numerically treating gravity and hydrodynamics in the context of galaxy evolution. This is still a vast field, and I necessarily had to select a subset ...
Vortices in superconductors: modelling and computer simulations
Du, Qiang
By Jennifer Deang, Qiang Du. Vortices in superconductors are tubes of magnetic flux, or equivalently, cylindrical current loops. ... is of importance both to the understanding of the basic physics of superconductors and to the design of devices. We
Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.
2014-06-01T23:59:59.000Z
This presentation discusses the significant enhancement of computational efficiency in nonlinear multiscale battery model for computer aided engineering in current research at NREL.
Grids of stellar models including second harmonic and colours: Solar composition
Yildiz, Mutlu
2015-01-01T23:59:59.000Z
Grids of stellar evolution are required in many fields of astronomy/astrophysics, such as planet hosting stars, binaries, clusters, chemically peculiar stars, etc. In this study, a grid of stellar evolution models with updated ingredients and recently determined solar abundances is presented. The solar values for the initial abundances of heavy elements and hydrogen and the mixing-length parameter are 0.0172, 0.7024 and 1.98, respectively. The mass step is small enough (0.01 M$_\odot$) that interpolation for a given star mass is not required. The range of stellar mass is 0.74 to 10.00 M$_\odot$. We present results in different forms of tables for easy and general application. The second stellar harmonic, required for analysis of apsidal motion of eclipsing binaries, is also listed. We also construct rotating models to determine the effect of rotation on stellar structure and derive fitting formulae for luminosity, radius and the second stellar harmonic as a function of the rotational parameter. We also compute and list colo...
Computational Modeling of Self-organization of Dislocations and...
Computational Modeling of Self-organization of Dislocations and Mesoscale Deformation of Metals Event Sponsor: Mathematics and Computing Science - LANS Seminar Start Date: Jun 19...
Advanced Computing Tools and Models for Accelerator Physics
Ryne, Robert D.
2008-01-01T23:59:59.000Z
Advanced Computing Tools and Models for Accelerator Physics, Robert D. Ryne. ... computing tools for accelerator physics. Following an ... scale computing in accelerator physics. Introduction: ...
Habitability of Super-Earth Planets around Other Suns: Models including Red Giant Branch Evolution
W. von Bloh; M. Cuntz; K. -P. Schroeder; C. Bounama; S. Franck
2008-12-04T23:59:59.000Z
The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a chemical and mineralogical composition similar to Earth's. We present a thermal evolution model for a 10 Earth mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production taking into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ) determined by the limits of biological productivity on the planetary surface. Our model considers the solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds". Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main-sequence on the Red Giant Branch.
Computer Modelling of 3D Geological Surface
Kodge, B G
2011-01-01T23:59:59.000Z
Geological surveying presently uses methods and tools for computer modeling of 3D structures of the geological subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objectives of this paper are to present a 3D geological surface model of Latur district in Maharashtra state of India. This study is undertaken through several processes, which are discussed in this paper, to generate and visualize the automated 3D geological surface model of the projected area.
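A common gridding step when building such surface models is interpolating scattered survey elevations onto a regular grid. The inverse-distance-weighting sketch below illustrates that general workflow only; the paper's actual processing chain is not specified here, and the function name and parameters are hypothetical.

```python
def idw_elevation(x, y, samples, power=2.0):
    """Inverse-distance-weighted elevation estimate at (x, y) from
    scattered (xi, yi, zi) survey points -- a generic gridding step
    for 3D surface modeling, shown for illustration only."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi  # exact hit on a survey point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * zi
        den += w
    return num / den
```

Evaluating this estimator on a regular grid of (x, y) points yields the elevation raster from which a 3D surface can be rendered.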
Computer Models in Astronomy and Statistics Stellar Evolution
van Dyk, David
Stellar Evolution; Calibration of X-ray Detectors; Embedding Astronomical Computer Models into Complex Statistical Models. David A. van Dyk, Statistics Section, Imperial College London. UCLA, February 2012.
Wild Fire Computer Model Helps Firefighters
Canfield, Jesse
2014-06-02T23:59:59.000Z
A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.
Computer models for evaluating financial decision alternatives
Christian, James Carroll
1973-01-01T23:59:59.000Z
Major Subject: Industrial Engineering. COMPUTER MODELS FOR EVALUATING FINANCIAL DECISION ALTERNATIVES, a thesis by James Carroll Christian. ... of this research is to bridge this gap by developing the methodology necessary to solve personal finance problems in a quantitative manner through the application of engineering economy principles.
Computational fire modeling for aircraft fire research
Nicolette, V.F.
1996-11-01T23:59:59.000Z
This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as computational fire tools for addressing those issues, and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems, and also to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-ε turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk assessment compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.
INTERIOR MODELS OF SATURN: INCLUDING THE UNCERTAINTIES IN SHAPE AND ROTATION
Helled, Ravit [Department of Geophysics, Atmospheric and Planetary Sciences, Tel-Aviv University, Tel-Aviv (Israel); Guillot, Tristan [Universite de Nice-Sophia Antipolis, Observatoire de la Cote d'Azur, CNRS UMR 7293, BP 4229, F-06304 Nice (France)
2013-04-20T23:59:59.000Z
The accurate determination of Saturn's gravitational coefficients by Cassini could provide tighter constraints on Saturn's internal structure. Also, occultation measurements provide important information on the planetary shape which is often not considered in structure models. In this paper we explore how wind velocities and internal rotation affect the planetary shape and the constraints on Saturn's interior. We show that within the geodetic approach the derived physical shape is insensitive to the assumed deep rotation. Saturn's re-derived equatorial and polar radii at 100 mbar are found to be 60,365 ± 10 km and 54,445 ± 10 km, respectively. To determine Saturn's interior, we use one-dimensional three-layer hydrostatic structure models and present two approaches to include the constraints on the shape. These approaches, however, result in only small differences in Saturn's derived composition. The uncertainty in Saturn's rotation period is more significant: with Voyager's 10h39m period, the derived mass of heavy elements in the envelope is 0-7 M⊕. With a rotation period of 10h32m, this value becomes <4 M⊕, below the minimum mass inferred from spectroscopic measurements. Saturn's core mass is found to depend strongly on the pressure at which helium phase separation occurs, and is estimated to be 5-20 M⊕. Lower core masses are possible if the separation occurs deeper than 4 Mbar. We suggest that the analysis of Cassini's radio occultation measurements is crucial to test shape models and could lead to constraints on Saturn's rotation profile and departures from hydrostatic equilibrium.
7. Business Models: Learnings from founding a Computer Vision Startup
Solem, Jan Erik
How are you ... ? Models (not only technology). Auction business model; bricks and clicks business model; collective business models; component business model; cutting out ...
Modeling-Computer Simulations At Dixie Valley Geothermal Area...
Exploration Activity: Modeling-Computer Simulations at Dixie Valley Geothermal Area (Wisian & Blackwell, 2004).
Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell...
Exploration Activity: Modeling-Computer Simulations at Stillwater Area (Wisian & Blackwell, 2004).
Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...
Exploration Activity: Modeling-Computer Simulations at Valles Caldera - Redondo Geothermal Area (Wilt & Haar, 1986).
Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell...
Exploration Activity: Modeling-Computer Simulations at Desert Peak Area (Wisian & Blackwell, 2004).
Modeling-Computer Simulations At White Mountains Area (Goff ...
Exploration Activity: Modeling-Computer Simulations at White Mountains Area (Goff & Decker, 1983).
An Interactive Computer Model of Two-Country Trade
Hamlen, Kevin W.
By Bill Hamlen and Kevin Hamlen. Abstract: We introduce an interactive computer model of two-country trade that allows students to investigate ... is to present an interactive computer model of two-country international trade that allows students ...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
Exploration Activity: Modeling-Computer Simulations. Activity date: 2003. Usefulness not indicated; DOE funding unknown. Notes: several fluid-flow models presented regarding the Long Valley Caldera.
King, Michael J. (Michael James), 1978-
2006-01-01T23:59:59.000Z
Woven fabrics are used in many applications, including ballistic armors and fabric-reinforced composites. Advances in small-scale technologies are enabling new applications including fabrics with embedded electronics, ...
Iyengar, Srinivasan S.
Received: June 12, 2007; in final form: August 11, 2007. We have introduced a computational methodology ... the precise vibrational signatures that contribute to dynamics in soft-mode hydrogen-bonded systems ... of hydrogen-bonded systems and hydrogen transfer extends beyond fundamental chemistry and well into the areas
Computational models of intergroup competition and warfare.
Letendre, Kenneth (University of New Mexico); Abbott, Robert G.
2011-11-01T23:59:59.000Z
This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.
Polarization of metallic carbon nanotubes from a model that includes both net charges and dipoles
Mayer, Alexandre
... computational resources. This time issue is especially important in molecular dynamics simulations ... by associating with each atom both a net electric charge and a dipole. Considering net charges in addition to the dipoles enables one ... From a physical point of view, consid...
Bytecode unification of geospatial computable models
Köbben, Barend
By Jan Kolár, Grifinor Project (jan.kolar@grifinor.net). Abstract: Geospatial modelling revolves ... heterogeneous ... to fix and reuse. Field-based and object-based geospatial models often share common GIS data
Preliminary Phase Field Computational Model Development
Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep
2014-12-15T23:59:59.000Z
This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. 
Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress), and planned measurements of major hysteresis loops and first order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with single non-magnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately to simulate magnetic Barkhausen noise signals.
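The Landau-Lifshitz-Gilbert dynamics underlying these phase-field simulations can be illustrated on the simplest possible system, a single macrospin in a uniform reduced field. This is a dimensionless toy integration, far removed from the reported finite-boundary phase-field code; the damping constant, time step, and explicit-Euler scheme are assumptions made for clarity.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def llg_relax(m, h, alpha=0.5, dt=0.01, steps=2000):
    """Explicit-Euler integration of the dimensionless
    Landau-Lifshitz-Gilbert equation for one macrospin:
        dm/dt = -(1/(1+a^2)) [ m x h + a m x (m x h) ]
    The unit magnetization m precesses about the reduced field h
    and relaxes toward it (illustrative parameters)."""
    pre = 1.0 / (1.0 + alpha * alpha)
    for _ in range(steps):
        mxh = cross(m, h)
        mxmxh = cross(m, mxh)
        dm = tuple(-pre * (mxh[i] + alpha * mxmxh[i]) for i in range(3))
        m = tuple(m[i] + dt * dm[i] for i in range(3))
        norm = math.sqrt(sum(c * c for c in m))
        m = tuple(c / norm for c in m)  # renormalize to keep |m| = 1
    return m
```

Starting perpendicular to the field, the moment precesses and spirals into alignment with it; this relaxation mechanism is what drives domain evolution in the full phase-field model.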
Math 574 -- Optimization Models in Computational Biology (Spring 2008)
Krishnamoorthy, Bala
Bioinformatics and computational biology (BCB) is one of the "hottest" interdisciplinary areas of science today. Math 574 -- Optimization Models in Computational Biology (Spring 2008). Course Title: Topics in Optimization: Models in Computational Biology. Time: Tue-Thu 12:00-1:15 pm. Credits: 3. Location: Webster B12
Terzopoulos, Demetri
Visual Modeling for Computer Animation: Graphics with a Vision (from Computer Vision to Computer Graphics). Demetri Terzopoulos describes a facial animation project that uses specialized imaging devices to capture models of human heads, and offers a personal retrospective on image-based modeling for computer animation. As we shall see, one of the projects...
Computer modeling of corrosion in absorption cooling cycles
Anderko, A.; Young, R.D. [OLI Systems Inc., Morris Plains, NJ (United States)
1999-11-01T23:59:59.000Z
A comprehensive model has been developed for the computation of corrosion rates of carbon steels in the presence of lithium bromide-based brines that are used as working fluids for absorption refrigeration cycles. The model combines a thermodynamic model that provides realistic speciation of aqueous systems with an electrochemical model for partial cathodic and anodic processes on the metal surface. The electrochemical model includes the adsorption of halides, which strongly influences the corrosion process. Also, the model takes into account the formation of passive films, which become important at high temperatures, at which the refrigeration equipment operates. The model has been verified by comparing calculated corrosion rates with laboratory data for carbon steels in LiBr solutions. Good agreement between the calculated and experimental corrosion rates has been obtained. In particular, the model is capable of reproducing the effects of changes in alkalinity and molybdate concentration on the rates of general corrosion. The model has been incorporated into a program that makes it possible to analyze the effects of various conditions such as temperature, pressure, solution composition or flow velocity on corrosion rates.
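The electrochemical half of such a corrosion model rests on the mixed-potential principle: the corrosion potential is the point where the total anodic (metal dissolution) and cathodic (reduction) partial currents balance. A loose sketch with Tafel kinetics follows; the exchange currents, equilibrium potentials, and Tafel slopes below are illustrative placeholders, not OLI's fitted model for LiBr brines, and bisection is simply one convenient root-finder:

```python
import math

def mixed_potential(i0_a, E_a, b_a, i0_c, E_c, b_c):
    """Find the corrosion potential E_corr where the anodic and cathodic
    Tafel currents balance, by bisection on [E_a, E_c]."""
    ia = lambda E: i0_a * math.exp((E - E_a) / b_a)   # anodic partial current
    ic = lambda E: i0_c * math.exp(-(E - E_c) / b_c)  # cathodic partial current
    lo, hi = E_a, E_c
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ia(mid) < ic(mid):
            lo = mid
        else:
            hi = mid
    E = 0.5 * (lo + hi)
    return E, ia(E)  # corrosion potential (V) and corrosion current density

# Hypothetical parameters (A/cm^2, V): iron-like anode, oxygen-like cathode.
E_corr, i_corr = mixed_potential(i0_a=1e-6, E_a=-0.44, b_a=0.06,
                                 i0_c=1e-7, E_c=0.0, b_c=0.12)
print(E_corr, i_corr)
```

In a model like the one described, the inputs to these partial-process kinetics would themselves come from the thermodynamic speciation model, with additional terms for halide adsorption and passive-film formation.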
Integration of engineering models in computer-aided preliminary design
Lajoie, Ronnie M.
The problems of the integration of engineering models in computer-aided preliminary design are reviewed. This paper details the research, development, and testing of modifications to Paper Airplane, a LISP-based computer ...
Simplified method to include the tensor contribution in {alpha}-cluster model
Itagaki, N. [Hahn-Meitner-Institut Berlin, D-14109 Berlin (Germany); Department of Physics, University of Tokyo, Hongo, 113-0033 Tokyo (Japan)]; Masui, H. [Information Processing Center, Kitami Institute of Technology, 090-8507 Kitami (Japan)]; Ito, M. [Institute of Physics, University of Tsukuba, 305-8571 Tsukuba (Japan)]; Aoyama, S. [Integrated Information Processing Center, Niigata University, 950-2181 Niigata (Japan)]; Ikeda, K. [The Institute of Physical and Chemical Research (RIKEN), Wako 351-0098 (Japan)]
2006-03-15T23:59:59.000Z
We propose a simplified model to directly take into account the contribution of the tensor interaction (SMT) for light nuclei by extending the {alpha}-cluster model. In {sup 8}Be, the energy curve with respect to the relative distance between the two {sup 4}He clusters suggests that the cluster structure persists even though the tensor interaction contributes strongly. In addition to SMT, a simplified method to take into account the strong spin-orbit contribution is introduced, and the coupling effects of these two models are shown to be important in {sup 12}C, in contrast to {sup 8}Be.
Dyer, Bill
Student Ownership of Work Created in Computer Science Classes and Projects. Ownership of software, including the source code, that students create as part of their MSU education activities... a perpetual royalty-free nonexclusive right to use the source code and make derivative works for educational...
A wave-based model for the marginal ice zone including a floe breaking parameterization
..., A. Kohout, and L. Bertino. Received 1 October 2010; revised 10 December 2010; accepted 29 December... Designed with implementation into two-dimensional sea ice models in mind. Citation: Dumont, D., A. Kohout, and L. Bertino (2011...
Computational model of miniature pulsating heat pipes.
Martinez, Mario J.; Givler, Richard C.
2013-01-01T23:59:59.000Z
The modeling work described herein represents the Sandia National Laboratories (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP): a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, as demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.
Department of Computing CSP||B modelling for railway verification
Doran, Simon J.
University of Surrey, Department of Computing, Computing Sciences Report CS-12-03. CSP||B modelling for railway verification: the double junction. Schneider, Helen Treharne. March 30th 2012. ...work in verifying railway systems through CSP||B modelling and analysis. In particular we consider...
Disruptive technology business models in cloud computing
Krikos, Alexis Christopher
2010-01-01T23:59:59.000Z
Cloud computing, a term whose origins have been in existence for more than a decade, has come into fruition due to technological capabilities and marketplace demands. Cloud computing can be defined as a scalable and flexible ...
Modeling-Computer Simulations At Central Nevada Seismic Zone...
Location: Central Nevada Seismic Zone Geothermal Region. Exploration Technique: Modeling-Computer Simulations. Usefulness: not indicated. DOE-funding: Unknown. References: J. W....
Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...
Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area (Goff & Decker, 1983). Exploration Activity Details. Location: Fenton Hill HDR Geothermal Area. Exploration Technique:...
Modeling-Computer Simulations At Nw Basin & Range Region (Biasi...
Location: Northwest Basin and Range Geothermal Region. Exploration Technique: Modeling-Computer Simulations. Usefulness: useful regional reconnaissance. DOE-funding:...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Farrar, Et Al., 2003). Exploration...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Battaglia, Et Al., 2003)...
Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker...
Exploration Activity: Modeling-Computer Simulations At Akutan Fumaroles Area (Kolker, Et Al., 2010). Exploration Activity...
Modeling-Computer Simulations At San Juan Volcanic Field Area...
Exploration Activity: Modeling-Computer Simulations At San Juan Volcanic Field Area (Clarkson & Reiter, 1987). Exploration...
Modeling-Computer Simulations At Central Nevada Seismic Zone...
Modeling-Computer Simulations At Central Nevada Seismic Zone Region (Biasi, Et Al., 2009). Exploration Activity Details. Location: Central Nevada Seismic Zone Geothermal Region...
Modeling-Computer Simulations At Chocolate Mountains Area (Alm...
Exploration Activity: Modeling-Computer Simulations At Chocolate Mountains Area (Alm, Et Al., 2010). Exploration Activity...
Modeling-Computer Simulations At Walker-Lane Transitional Zone...
Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Biasi, Et Al., 2009). Exploration...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
Exploration Activity: Modeling-Computer Simulations At Long Valley Caldera Geothermal Area (Tempel, Et Al., 2011). Exploration...
Modeling-Computer Simulations At Northern Basin & Range Region...
Exploration Activity: Modeling-Computer Simulations At Northern Basin & Range Region (Biasi, Et Al., 2009). Exploration...
Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...
Location: Sulphur Springs Geothermal Area. Exploration Technique: Modeling-Computer Simulations. Activity Date: 1987 - 1995. Usefulness: useful. DOE-funding: Unknown. Notes: A modification of the...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
Exploration Activity Details. Location: Long Valley Caldera Geothermal Area. Exploration Technique: Modeling-Computer Simulations. Activity Date: 1995 - 2000. Usefulness: not indicated. DOE-funding: Unknown...
Modeling-Computer Simulations At Northern Basin & Range Region...
Location: Northern Basin and Range Geothermal Region. Exploration Technique: Modeling-Computer Simulations. Usefulness: not indicated. DOE-funding: Unknown. References: J. W. Pritchett...
Modeling-Computer Simulations At Kilauea East Rift Geothermal...
Exploration Activity: Modeling-Computer Simulations At Kilauea East Rift Geothermal Area (Rudman & Epp, 1983). Exploration...
Modeling-Computer Simulations At Walker-Lane Transitional Zone...
Exploration Activity: Modeling-Computer Simulations At Walker-Lane Transitional Zone Region (Pritchett, 2004). Exploration...
Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett...
Exploration Activity: Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett, 2004). Exploration Activity Details...
Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...
Exploration Activity: Modeling-Computer Simulations At Valles Caldera - Sulphur Springs Geothermal Area (Wilt & Haar, 1986)...
Modeling-Computer Simulations At Dixie Valley Geothermal Area...
Exploration Activity: Modeling-Computer Simulations At Dixie Valley Geothermal Area (Wannamaker, Et Al., 2006). Exploration...
Scientists use world's fastest computer to model materials under...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Scientists use world's fastest computer to model materials under extreme conditions. Materials scientists are for the first time attempting to...
Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz...
Additional References. Retrieved from "http://en.openei.org/w/index.php?title=Modeling-Computer_Simulations_At_Fish_Lake_Valley_Area_(Deymonaz,_Et_Al.,_2008)&oldid=387627...
Computationally Efficient Modeling of High-Efficiency Clean Combustion...
Broader source: Energy.gov (indexed) [DOE]
Volvo: multi-zone cycle simulation, OpenFOAM model development. Bosch: High Performance Computing of HCCI/SI transition. Delphi: direct injection. GE Research: new...
MA598: Modeling and Computation in Optics and Electromagnetics
2010-08-24T23:59:59.000Z
MA598: Modeling and Computation in Optics and Electromagnetics. Instructor: Peijun Li, office: Math 440, phone: 49-40846, e-mail: lipeijun@math.purdue.edu.
MA692: Modeling and Computation in Optics and Electromagnetics
2012-08-14T23:59:59.000Z
MA692: Modeling and Computation in Optics and Electromagnetics. Instructor: Peijun Li, office: Math 440, phone: 49-40846, e-mail: lipeijun@math.purdue.edu.
Modeling-Computer Simulations At Nevada Test And Training Range...
Exploration Activity: Modeling-Computer Simulations At Nevada Test And Training Range Area (Sabin, Et Al., 2004). Exploration Activity Details. Location:...
Contribution of muscular weakness to osteoporosis: Computational and animal models
Gefen, Amit
Contribution of muscular weakness to osteoporosis: Computational and animal models. M. Be... Results obtained herein indicate that muscular weakness may be an important factor contributing to osteoporosis.
Computer Graphics, Volume 15, Number 3, August 1981. A REFLECTANCE MODEL FOR COMPUTER GRAPHICS
O'Brien, James F.
Robert L. Cook, Program of Computer Graphics, Cornell University, Ithaca, New York 14853; Kenneth E. Torrance, Sibley... with incidence angle. The paper presents a method for obtaining the spectral energy distribution of the light...
A stepped leader model for lightning including charge distribution in branched channels
Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)
2014-09-14T23:59:59.000Z
The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.
Kinetic Model for Motion Compensation in Computed Tomography
Kinetic Model for Motion Compensation in Computed Tomography. Zhou Yu, Jean-Baptiste Thibault... Model-based iterative reconstruction (MBIR) algorithms have recently been applied to computed tomography and demonstrated superior image quality performance [1], [2], [3]. These methods...
The equation-transform model for Dirac–Morse problem including Coulomb tensor interaction
Ortakaya, Sami, E-mail: sami.ortakaya@yahoo.com
2013-11-15T23:59:59.000Z
The approximate solutions of Dirac equation with Morse potential in the presence of Coulomb-like tensor potential are obtained by using Laplace transform (LT) approach. The energy eigenvalue equation of the Dirac particles is found and some numerical results are obtained. By using convolution integral, the corresponding radial wave functions are presented in terms of confluent hypergeometric functions. -- Highlights: •The Dirac equation with tensor interaction is solved by using Laplace transform. •For solving this equation, we introduce the equation-transform model. •Numerical results and plots for pseudospin and spin symmetric solutions are given. •The obtained numerical results by using transform method are compared with orthogonal polynomial method.
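For reference, the Morse potential and the Coulomb-like tensor term that the abstract refers to are conventionally written as follows; the abstract itself does not give explicit forms, so these are the standard textbook definitions, with $D_e$ the dissociation energy, $r_e$ the equilibrium separation, $a$ the range parameter, and $H$ the tensor coupling constant:

```latex
V_{\mathrm{Morse}}(r) = D_e\left(e^{-2a(r-r_e)} - 2\,e^{-a(r-r_e)}\right),
\qquad
U_T(r) = -\frac{H}{r}, \quad H \ \text{constant}.
```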
H. J. Haubold; D. Kumar
2007-08-16T23:59:59.000Z
The Maxwell-Boltzmannian approach to nuclear reaction rate theory is extended to cover Tsallis statistics (Tsallis, 1988) and more general cases of distribution functions. An analytical study of respective thermonuclear functions is being conducted with the help of statistical techniques. The pathway model, recently introduced by Mathai (2005), is utilized for thermonuclear functions and closed-form representations are obtained in terms of H-functions and G-functions. Maxwell-Boltzmannian thermonuclear functions become particular cases of the extended thermonuclear functions. A brief review on the development of the theory of analytic representations of nuclear reaction rates is given.
Pouly, Amaury; Graça, Daniel S
2012-01-01T23:59:59.000Z
Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model of Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that a realistic model of analog computation, namely the General Purpose Analog Computer (GPAC), can simulate Turing machines in a computationally efficient manner. More concretely, we show that, modulo...
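GPAC-generable functions are exactly the solutions of polynomial initial value problems, so a digital sketch of a GPAC run is just numerical integration of a polynomial ODE. The example below is the classic textbook case (not taken from this paper): y' = y, whose GPAC circuit is a single integrator fed back into itself and whose output is e^t.

```python
import math

def gpac_integrate(f, y0, t_end, h=1e-3):
    """Integrate y' = f(y) with classical RK4. A GPAC builds the polynomial
    right-hand side f from adder, multiplier, and integrator units."""
    n = round(t_end / h)
    y = y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# y' = y, y(0) = 1 generates the exponential, a GPAC-generable function.
e_approx = gpac_integrate(lambda y: y, 1.0, 1.0)
print(e_approx)  # ~ 2.718281828...
```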
Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...
Modeling-Computer Simulations Activity Date 1987 - 1995 Usefulness useful DOE-funding Unknown Notes A modification of the Aki-Lamer method was used to model the amplitude data....
Towards Real Earth Models -- Computational Geophysics on Unstructured Tetrahedral Meshes?
Farquharson, Colin G.
Towards Real Earth Models -- Computational Geophysics on Unstructured Tetrahedral Meshes? Colin G. Farquharson. Outline: geological models; advantages of unstructured tetrahedral meshes; EM geophysics on unstructured tetrahedral meshes; disadvantages, difficulties, challenges; conclusions.
Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model
Steven R. Sherman
2007-05-01T23:59:59.000Z
The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.
Computationally Efficient Cardiac Bioelectricity Models Toward Whole-Heart Simulation
Branicky, Michael S.
Computationally Efficient Cardiac Bioelectricity Models Toward Whole-Heart Simulation. Nathan A... of developing new insights and techniques in simulating the electrical behavior of the human heart. While very... A computationally feasible whole-heart model could be invaluable in the study of human heart pathology.
The Two Server Problem Models of Online Computation
Bein, Wolfgang
A Randomized Algorithm for Two Servers in Cross Polytope S... Wolfgang Bein, James Oravec. Supported by NSF grant CCR-0312093. Outline: The Two Server Problem; Models of Online Computation; Results; The Randomized 2-Server...
Applying High Performance Computing to Analyzing by Probabilistic Model Checking
Schneider, Carsten
Applying High Performance Computing to Analyzing Mobile Cellular... by Probabilistic Model Checking. We report in this paper on the use of high performance computing in order to analyze, with the probabilistic model checker PRISM, ...
Predicting Vehicle Crashworthiness: Validation of Computer Models for
Berger, Jim
Predicting Vehicle Crashworthiness: Validation of Computer Models for Functional and Hierarchical... Cafeo, Chin-Hsu Lin, and Jian Tu. Abstract: The CRASH computer model simulates the effect of a vehicle colliding against different barrier types. If it accurately represents real vehicle crashworthiness...
Los Alamos CCS (Center for Computer Security) formal computer security model
Dreicer, J.S.; Hunteman, W.J. (Los Alamos National Lab., NM (USA))
1989-01-01T23:59:59.000Z
This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 70's. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell-Lapadula abstract sets of objects and subjects. 5 refs.
IEEE TRANSACTION ON VISUALIZATION AND COMPUTER GRAPHICS 1 Water Surface Modeling from A Single
Martin, Ralph R.
Water Surface Modeling from A Single... and Phillip Willis. Abstract: We introduce a video-based approach for producing water surface models. Water brings unique challenges [15]. Major difficulties include its lack of matchable features... Recent...
A Vast Machine Computer Models, Climate Data, and the Politics of Global Warming
A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Paul N. Edwards. Subjects: Climatology--History; Meteorology--History; Climatology--Technological innovation; Global temperature...
Quantum mechanical Hamiltonian models of the computation process
Benioff, P.
1983-01-01T23:59:59.000Z
As noted in the proceedings of this conference it is of importance to determine if quantum mechanics imposes fundamental limits on the computation process. Some aspects of this problem have been examined by the development of different types of quantum mechanical Hamiltonian models of Turing machines. (Benioff 1980, 1982a, 1982b, 1982c). Turing machines were considered because they provide a standard representation of all digital computers. Thus, showing the existence of quantum mechanical models of all Turing machines is equivalent to showing the existence of quantum mechanical models of all digital computers. The types of models considered all had different properties. Some were constructed on two-dimensional lattices of quantum spin systems of spin 1/2 (Benioff 1982b, 1982c) or higher spins (Benioff 1980). All the models considered Turing machine computations which were made reversible by addition of a history tape. Quantum mechanical models of Bennett's reversible machines (Bennett 1973) in which the model makes a copy of the computation result and then erases the history and undoes the computation in lockstep to recover the input were also developed (Benioff 1982a). To avoid technical complications all the types of models were restricted to modelling an arbitrary but finite number of computation steps.
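Bennett's compute-copy-uncompute pattern referenced above (record a history while computing, copy the result, then undo the computation in lockstep to erase the history and recover the input) can be illustrated classically. This is a toy sketch of the idea, not Benioff's quantum Hamiltonian construction; the irreversible step x -> x // 2 and its bit log are invented for illustration:

```python
def reversible_run(x, steps):
    """Run a toy irreversible step x -> x // 2, made reversible by logging
    the discarded low bit on a 'history tape'."""
    history = []
    for _ in range(steps):
        history.append(x & 1)  # record the information the step destroys
        x >>= 1
    return x, history

def reversible_unrun(x, history):
    """Undo the computation in lockstep using the history tape."""
    for bit in reversed(history):
        x = (x << 1) | bit
    return x

x0 = 0b101101
result, hist = reversible_run(x0, 4)       # forward computation, history kept
output = result                            # copy the result before uncomputing
restored = reversible_unrun(result, hist)  # erase history by running backwards
print(output, restored)  # restored == x0
```

Each forward step destroys one bit, so logging exactly that bit is what makes the step invertible; after uncomputing, only the copied output and the original input remain, which is the point of Bennett's construction.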
Computing the Electricity Market Equilibrium: Uses of market equilibrium models
Baldick, Ross
Computing the Electricity Market Equilibrium: Uses of market equilibrium models. Ross Baldick. Abstract: In this paper we consider the formulation and uses of electricity market equilibrium models. Keywords: Electricity market, Equilibrium models. I. INTRODUCTION. Electricity market equilibrium modelling...
Modeling Time in Computing: A Taxonomy and a Comparative Survey
Carlo A. Furia; Dino Mandrioli; Angelo Morzenti; Matteo Rossi
2010-10-11T23:59:59.000Z
The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering. This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is suitable also for non-specialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms. The paper first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from "traditional" models and formalisms to more modern ones.
Applications to Computer Closed Network Model
Shihada, Basem
Suitable for modeling "virtual circuit" (VC) with window flow control. Data sources/sinks are modeled explicitly. Model of a VC with Window Flow Control: packets are individually acknowledged. A customer entering...
Lagi, Marco; Bertrand, Karla Z; Bar-Yam, Yaneer
2012-01-01T23:59:59.000Z
Increases in global food prices have led to widespread hunger and social unrest---and an imperative to understand their causes. In a previous paper published in September 2011, we constructed for the first time a dynamic model that quantitatively agreed with food prices. Specifically, the model fit the FAO Food Price Index time series from January 2004 to March 2011, inclusive. The results showed that the dominant causes of price increases during this period were investor speculation and ethanol conversion. The model included investor trend following as well as shifting between commodities, equities and bonds to take advantage of increased expected returns. Here, we extend the food prices model to January 2012, without modifying the model but simply continuing its dynamics. The agreement is still precise, validating both the descriptive and predictive abilities of the analysis. Policy actions are needed to avoid a third speculative bubble that would cause prices to rise above recent peaks by the end of 2012.
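A minimal caricature of the trend-following mechanism described above, not the authors' fitted model (the gains, shock, and fundamental value below are invented), shows how a strong enough trend-following gain turns a smooth adjustment toward fundamental value into bubble-like overshoot:

```python
def simulate_prices(p0, p1, value, k_trend, k_fund, steps):
    """Price update: trend followers extrapolate the last change, while a
    fundamental term pulls the price back toward `value`:
    p[t+1] = p[t] + k_trend*(p[t]-p[t-1]) + k_fund*(value - p[t])."""
    prices = [p0, p1]
    for _ in range(steps):
        p_prev, p = prices[-2], prices[-1]
        prices.append(p + k_trend * (p - p_prev) + k_fund * (value - p))
    return prices

# Weak trend following: the price converges to the fundamental value from below.
calm = simulate_prices(100.0, 101.0, value=120.0, k_trend=0.2, k_fund=0.1, steps=200)
# Strong trend following: the same shock overshoots the fundamental value
# before damping out, a bubble-like trajectory.
bubbly = simulate_prices(100.0, 101.0, value=120.0, k_trend=0.9, k_fund=0.1, steps=200)
print(max(calm), max(bubbly))
```

In the linearized deviation dynamics the two regimes correspond to real versus complex roots of the characteristic equation; the complex (oscillatory) case is what produces the overshoot.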
Simulated movement of musculature in a computer generated model
Ten Wolde, Kristian Bernard
2000-01-01T23:59:59.000Z
Designing a computer generated character involves many steps, including the structure that is responsible for moving the character in an organic manner. There are several ways to develop a character to control the motion exhibited by the skin...
ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS
Goodarz Ahmadi
2000-11-01T23:59:59.000Z
In the first year of the project, solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Progress was also made in analyzing the gravity chute flows of solid-liquid mixtures. An Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is being developed. The approach uses an Eulerian analysis of gas-liquid flows in the bubble column, and makes use of a Lagrangian particle tracking procedure to analyze the particle motions. Progress was also made in developing a rate-dependent, thermodynamically consistent model for multiphase slurry flows in a state of turbulent motion. The new model includes the effect of phasic interactions and leads to anisotropic effective phasic stress tensors. Progress was also made in measuring concentration and velocity of particles of different sizes near a wall in a duct flow. The formulation of a thermodynamically consistent model for chemically active multiphase solid-fluid flows in a turbulent state of motion was also initiated. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent rate-dependent anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction, and to establish the material parameters of the model. (2) To provide experimental data for phasic fluctuation and mean velocities, as well as the solid volume fraction, in the shear flow devices.
(3) To develop an accurate computational capability incorporating the new rate-dependent and anisotropic model for analyzing reacting and nonreacting slurry flows, and to solve a number of technologically important problems related to Fischer-Tropsch (F-T) liquid fuel production processes. (4) To verify the validity of the developed model by comparing the predicted results with the performed and the available experimental data under idealized conditions.
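The Eulerian-Lagrangian split described above can be sketched in a few lines: the carrier fluid supplies an Eulerian velocity field, and each particle is advanced along its own trajectory under a drag law. The linear Stokes-type drag, particle response time, and uniform carrier flow below are illustrative assumptions, not the project's actual closure model:

```python
import numpy as np

def track_particle(x0, v0, u_fluid, tau, g, dt, steps):
    """Lagrangian tracking of one particle in a prescribed (Eulerian) fluid
    velocity field u_fluid(x), with linear Stokes-type drag of response time
    tau and gravity g:  dv/dt = (u_fluid(x) - v)/tau + g."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(steps):
        a = (u_fluid(x) - v) / tau + g  # drag toward local fluid velocity + gravity
        v = v + dt * a
        x = x + dt * v
    return x, v

# Uniform horizontal carrier flow: the particle relaxes to the fluid velocity
# plus the terminal settling velocity g*tau.
uf = lambda x: np.array([1.0, 0.0])
g = np.array([0.0, -9.81])
x, v = track_particle([0, 0], [0, 0], uf, tau=0.05, g=g, dt=1e-3, steps=5000)
print(v)  # ~ [1.0, -0.49]
```

In a full bubble-column simulation the field u_fluid would come from the Eulerian gas-liquid solution at the particle's position rather than being prescribed.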
D’Abramo, Marco [Supercomputing Applications and Innovation, CINECA, Via dei Tizii, 6, 00185 Rome (Italy); Dipartimento di Chimica, Università Sapienza, P.le Aldo Moro, 5, 00185 Rome (Italy)]; Aschi, Massimiliano [Department of Physical and Chemical Sciences, University of Aquila, via Vetoio (Coppito 1), 67010 Aquila (Italy)]; Amadei, Andrea, E-mail: andrea.amadei@uniroma2.it [Dipartimento di Scienze e Tecnologie Chimiche, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]
2014-04-28T23:59:59.000Z
Here, we extend a recently introduced theoretical-computational procedure [M. D’Alessandro, M. Aschi, C. Mazzuca, A. Palleschi, and A. Amadei, J. Chem. Phys. 139, 114102 (2013)] to include quantum vibrational transitions in modelling electronic spectra of atomic molecular systems in condensed phase. The method is based on the combination of Molecular Dynamics simulations and quantum chemical calculations within the Perturbed Matrix Method approach. The main aim of the presented approach is to reproduce as much as possible the spectral line shape which results from a subtle combination of environmental and intrinsic (chromophore) mechanical-dynamical features. As a case study, we were able to model the low energy UV-vis transitions of pyrene in liquid acetonitrile in good agreement with the experimental data.
Clique-detection Models in Computational Biochemistry and Genomics
Butenko, Sergiy
Clique-detection Models in Computational Biochemistry and Genomics. S. Butenko and W. E. Wilhelm. Abstract: Many important problems arising in computational biochemistry and genomics have been formulated ... and genomic aspects of the problems as well as to the graph-theoretic aspects of the solution approaches. Each ...
Inverse Modelling in Geology by Interactive Evolutionary Computation
Boschetti, Fabio
Inverse Modelling in Geology by Interactive Evolutionary Computation. Chris Wijns, Fabio Boschetti ... of geological processes, in the absence of established numerical criteria to act as inversion targets, requires ... evolutionary computation provides for the inclusion of qualitative geological expertise within a rigorous ...
Computational Modeling and Optimization of Proton Exchange Membrane Fuel Cells
Victoria, University of
Computational Modeling and Optimization of Proton Exchange Membrane Fuel Cells, by Marc Secanell Gallart, Bachelor in Engineering ... fuel cells. In this thesis, a computational framework for fuel cell analysis and optimization is presented ...
COMPUTATIONAL FLUID DYNAMICS MODELING OF SOLID OXIDE FUEL CELLS
COMPUTATIONAL FLUID DYNAMICS MODELING OF SOLID OXIDE FUEL CELLS. Ugur Pasaogullari and Chao-... A ...-dimensional model has been developed to simulate solid oxide fuel cells (SOFC). The model fully couples ... current density operation. INTRODUCTION: Solid oxide fuel cells (SOFC) are among possible candidates ...
A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus
Raustad, Richard A. [Florida Solar Energy Center
2013-01-01T23:59:59.000Z
This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.
Zhang, Xuesong
2009-05-15T23:59:59.000Z
... weights for river stage prediction (Chau, 2006). Other evolutionary algorithms, such as Differential Evolution (DE) (Storn and Price, 1997) and Artificial Immune Systems (AIS) (de Castro and Von Zuben, 2002a; de Castro and Von Zuben, 2002b), although ... is to structure the hydrologic model as a probability model; then the confidence interval of the model output can be computed (Montanari et al., 1997). Representative methods of this category include Markov Chain Monte Carlo (MCMC) and a Generalized Likelihood ...
Gering, Kevin L.
2013-01-01T23:59:59.000Z
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
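As context for the record above, here is a minimal sketch of the classical Butler-Volmer relation that the modified expression extends, together with a generic sigmoid of the kind the abstract mentions. The functional forms and parameter values are textbook/illustrative assumptions, not the patent's actual expressions:

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(i0, eta, T=298.15, alpha_a=0.5, alpha_c=0.5):
    """Classical Butler-Volmer current density for exchange current
    density i0 and overpotential eta (V); units follow those of i0."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))

def pulse_sigmoid(t, t_half=1.0, k=2.0):
    """Generic sigmoid in pulse time t; the patent couples expressions of
    this general shape to the kinetics (shape and parameters assumed here)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_half)))
```

With symmetric transfer coefficients the current density vanishes at zero overpotential and is antisymmetric in eta, which is the behavior the modified expression builds on.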
Language acquisition and implication for language change: A computational model.
Clark, Robert A J
1997-01-01T23:59:59.000Z
Computer modeling techniques, when applied to language acquisition problems, give an often unrealized insight into the diachronic change that occurs in language over successive generations. This paper shows that using ...
Computational Model of Forward and Opposed Smoldering Combustion in Microgravity
Rein, Guillermo; Fernandez-Pello, Carlos; Urban, David
2006-08-06T23:59:59.000Z
A novel computational model of smoldering combustion capable of predicting both forward and opposed propagation is developed. This is accomplished by considering the one-dimensional, transient, governing equations for ...
Computationally Efficient Modeling of High-Efficiency Clean Combustion...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Meeting, June 7-11, 2010, Washington, D.C. (ace012aceves2010o.pdf)
Journal of Computational Acoustics, FREQUENCY DOMAIN WAVE PROPAGATION MODELLING
Sheen, Dongwoo
Journal of Computational Acoustics, © IMACS. FREQUENCY DOMAIN WAVE PROPAGATION MODELLING ... effect of gas, brine or oil and gas-brine or gas-oil pore fluids on seismic velocities. Numerical ...
15.094 Systems Optimization: Models and Computation, Spring 2002
Freund, Robert Michael
A computational and application-oriented introduction to the modeling of large-scale systems in a wide variety of decision-making domains and the optimization of such systems using state-of-the-art optimization software. ...
Whitney, Daniel
... to include concurrent engineering. In both cases, the Unit has established strong ties with the Computer ... with defining product data models that will support concurrent engineering. Both fabrication and assembly in manufacturing must be structured like concurrent engineering activities: the users of the research must be part ...
ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS
Goodarz Ahmadi
2004-10-01T23:59:59.000Z
In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubble and particle motions. The bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data, and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for studying bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured; the technique of Phase-Doppler anemometry was used in these studies. An Eulerian volume-of-fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed, and the results were compared with the flow patterns observed in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data, and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied; the simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.
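The Lagrangian trajectory analysis mentioned in the abstract above advances each bubble or particle by integrating a drag-driven equation of motion. A one-particle sketch under strong simplifying assumptions (linear Stokes drag only, with an assumed response time; the actual model also includes collisions and other forces):

```python
def track_particle(u_fluid, v0, tau, dt, steps):
    """Explicit-Euler integration of dv/dt = (u_fluid - v) / tau, the
    Stokes-drag equation of motion for a particle with response time tau.
    Returns the velocity history of the particle."""
    v, history = v0, [v0]
    for _ in range(steps):
        v = v + dt * (u_fluid - v) / tau
        history.append(v)
    return history

# A particle released at rest in a 1 m/s stream relaxes toward the fluid speed.
vel = track_particle(u_fluid=1.0, v0=0.0, tau=0.01, dt=0.001, steps=100)
```

After 100 steps (ten response times) the particle velocity is within a fraction of a percent of the fluid velocity, which is the relaxation behavior such tracking procedures resolve.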
Sandia National Laboratories: Computational Modeling & Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... primary purpose is to model severe-accident progression in light-water-reactor (LWR) nuclear power plants. Sandia developed MELCOR for the US Nuclear Regulatory ...
Sandia National Laboratories: Computational Modeling & Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and atmospheric chemistry that is expected to benefit auto and engine manufacturers, oil and gas utilities, and other industries that employ combustion models. A paper...
Computer Modelling of Pigeon Navigation according to the "Map and Compass"-Model
Nehmzow, Ulrich
Computer Modelling of Pigeon Navigation according to the "Map and Compass" Model. Ulrich Nehmzow. Abstract: This paper presents a computer model of pigeon navigation (homing), based on Kramer's map-and-compass model ... intersecting gradients which are used by the birds to determine the correct compass heading for home ...
Modeling-Computer Simulations At U.S. West Region (Williams ...
Modeling-Computer Simulations at U.S. West Region (Williams & Deangelo, 2008). Exploration Activity Details. Location: U.S. West Region. Exploration Technique: Modeling-Computer Simulations.
Computational modeling of metal-organic frameworks
Sung, Jeffrey Chuen-Fai; Sung, Jeffrey Chuen-Fai
2012-01-01T23:59:59.000Z
... listed in Table 2.1. The SPC (Single Point Charge) family of ... vdW parameters. A flexible variant of the SPC water model is the SPC/Fw model of Voth, which adds harmonic bond and ...
Gering, Kevin L
2013-08-27T23:59:59.000Z
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell, and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model is also based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
Non-linear regression models for Approximate Bayesian Computation
Robert, Christian P.
Non-linear regression models for Approximate Bayesian Computation (ABC). Michael Blum, Olivier ... Blum and ... (2009) suggest the use of non-linear conditional heteroscedastic regression models ... Linear regression-based ABC can sometimes be improved ... using stochastic simulations ...
Personal Computer-Based Model for Cool Storage Performance Simulation
Kasprowicz, L. M.; Jones, J. W.; Hitzfelder, J.
1990-01-01T23:59:59.000Z
A personal-computer-based hourly simulation model was developed based on the CBS/ICE routines in the DOE-2.1 mainframe building simulation software. The new menu-driven model employs more efficient data and information handling than the previous ...
Computational Modeling of Brain Dynamics during Repetitive Head Motions
Burtscher, Martin
Computational Modeling of Brain Dynamics during Repetitive Head Motions. Igor Szczyrba, School ... the HIC scale to arbitrary head motions. Our simulations of the brain dynamics in sagittal and horizontal ... injury modeling, resonance effects. 1 Introduction: A rapid head motion can result in a severe brain injury ...
MODELS AND METRICS FOR ENERGY-EFFICIENT COMPUTER SYSTEMS
Kozyrakis, Christos
MODELS AND METRICS FOR ENERGY-EFFICIENT COMPUTER SYSTEMS. A DISSERTATION SUBMITTED TO THE DEPARTMENT ... promising energy-efficient technologies, and models to understand the effects of resource utilization decisions on power consumption. To facilitate energy-efficiency improvements, this dissertation presents ...
Nuclear shell-model code for massive parallel computation, "KSHELL"
Noritaka Shimizu
2013-10-21T23:59:59.000Z
A new code for nuclear shell-model calculations, "KSHELL", is developed. It aims at carrying out both massively parallel computation and single-node computation in the same manner. We solve the Schrödinger equation in the M-scheme shell-model space, utilizing the Thick-Restart Lanczos method. During the Lanczos iteration, the Hamiltonian matrix elements are generated "on the fly" in every matrix-vector multiplication. The vectors of the Lanczos method are distributed and stored in the memory of each parallel node. We report that the newly developed code has high parallel efficiency on the FX10 supercomputer and on a multi-core PC.
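The numerical kernel described above, a Lanczos iteration driven purely by matrix-vector products, can be sketched in a few lines. This illustrative version uses plain Lanczos with full reorthogonalization on a dense test matrix; KSHELL itself uses Thick-Restart Lanczos with distributed, on-the-fly matrix elements, which this sketch does not attempt to reproduce:

```python
import numpy as np

def lanczos_ground_state(matvec, dim, m=30, seed=0):
    """Approximate the lowest eigenvalue of a symmetric operator given only
    its matrix-vector product, as shell-model codes do when the Hamiltonian
    is generated on the fly."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    V, alphas, betas = [v], [], []
    beta, v_prev = 0.0, np.zeros(dim)
    for _ in range(min(m, dim)):
        w = matvec(v) - beta * v_prev
        alphas.append(v @ w)
        w -= alphas[-1] * v
        for u in V:                  # full reorthogonalization (fine for small m)
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        if beta < 1e-10:             # invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
        V.append(v)
    k = len(alphas)
    # Eigenvalues of the small tridiagonal matrix approximate those of A.
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]
```

Only `matvec` touches the operator, so the Hamiltonian never needs to be stored, which is exactly what makes the on-the-fly strategy scale.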
Methodology for characterizing modeling and discretization uncertainties in computational simulation
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01T23:59:59.000Z
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
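One standard building block of such discretization-error estimation is Richardson extrapolation, which compares results on grids of known refinement ratio to estimate the error and the observed order of accuracy. A generic sketch (an illustrative textbook technique, not the report's specific methodology):

```python
import math

def richardson_error(f_h, f_h2, r=2.0, p=None, f_h4=None):
    """Estimate the discretization error of a fine-grid result f_h from a
    coarse-grid result f_h2 (spacing r times larger). If the formal order p
    is unknown, a third level f_h4 gives the observed order of accuracy."""
    if p is None:
        p = math.log(abs(f_h4 - f_h2) / abs(f_h2 - f_h)) / math.log(r)
    error = (f_h2 - f_h) / (r ** p - 1.0)   # leading-order error in f_h
    return f_h - error, error, p            # extrapolated value, error, order

# Worked example: second-order central difference of d/dx sin(x) at x = 1.
def d_sin(h):
    return (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)

extrap, err, p_obs = richardson_error(d_sin(0.1), d_sin(0.2), f_h4=d_sin(0.4))
```

For this second-order scheme the observed order comes out close to 2, and the extrapolated value is markedly closer to cos(1) than the fine-grid result itself.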
Computational social network modeling of terrorist recruitment.
Berry, Nina M.; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.
2004-10-01T23:59:59.000Z
The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.
Protein Models Comparator Scalable Bioinformatics Computing on
Krasnogor, Natalio
... of parameters of energy functions used in template-free modelling and refinement. Although many protein ... Engine cloud platform, and is a showcase of how the emerging PaaS (Platform as a Service) technology could ... the predicted structure is compared against the target native structure. This type of evaluation is performed ...
Models of quantum computation and quantum programming languages
J. A. Miszczak
2011-12-03T23:59:59.000Z
The goal of the presented paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of quantum Turing machine, quantum circuits and quantum random access machine (QRAM) along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.
Integrated Multiscale Modeling of Molecular Computing Devices
Gregory Beylkin
2012-03-23T23:59:59.000Z
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
Computational modeling of materials processing and processes
Lowe, T.C.; Zhu, Yuntian; Bingert, J.F. [and others
1998-12-31T23:59:59.000Z
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Anisotropic mechanical properties of densified BSCCO powders are of paramount importance during thermo-mechanical processing of superconducting tapes and wires. Maximum current transport requires high relative density and a high degree of alignment of the single-crystal planes of the BSCCO. Unfortunately, this configuration causes high stresses that can lead to cracking, and thus reduce the density and the conductive properties of the tape. The current work develops a micromechanical material model; the model is calibrated and compared to experimental results, and then employed to analyze the effects of initial texture and confinement. Pressure and shear strains in the core of oxide powder-in-tube (OPIT) processed tapes are calculated by finite-element analysis. The calculated deformations were then applied as boundary conditions to the micromechanical model. Our calculated results were used to interpret a set of prototypical rolling experiments. 11 refs., 5 figs.
A gas kick model for the personal computer
Miller, Clayton Lowell
1987-01-01T23:59:59.000Z
A GAS KICK MODEL FOR THE PERSONAL COMPUTER. A Thesis by CLAYTON LOWELL MILLER, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1987. Major Subject: Petroleum Engineering. Approved as to style and content by: Hans C. Juvkam-Wold (Chair of Committee), Robert W. Heine (Member), Tibor G. Rozgonyi (Member), W. D. Von Gonten (Head ...)
Simulation models for computational plasma physics: Concluding report
Hewett, D.W.
1994-03-05T23:59:59.000Z
In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways, and are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low-frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low-frequency model, called the Streamlined Darwin Field (SDF) model (Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in a fully non-neutral SDF code BEAGLE (Larson, Ph.D. dissertation, 1993) and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating internal boundary conditions into this model with the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer, thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993).
Integrated Multiscale Modeling of Molecular Computing Devices
Weinan E
2012-03-29T23:59:59.000Z
The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, the time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01T23:59:59.000Z
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
Computational model for high-energy laser-cutting process
Kim, M.J.; Majumdar, P. [Northern Illinois Univ., DeKalb, IL (United States). Dept. of Mechanical Engineering
1995-06-01T23:59:59.000Z
A computational model for the simulation of a laser-cutting process has been developed using a finite element method. A transient heat transfer model is considered that deals with the material-cutting process using a Gaussian continuous wave laser beam. Numerical experimentation is carried out for mesh refinements and the rate of convergence in terms of groove shape and temperature. Results are also presented for the prediction of groove depth with different moving speeds.
Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)
Not Available
2011-09-01T23:59:59.000Z
In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.
Thermal building simulation and computer generation of nodal models
Paris-Sud XI, Université de
Thermal building simulation and computer generation of nodal models. H. Boyer, J.P. Chabriat, B. ... in the development of several packages simulating the dynamic behaviour of buildings. This paper shows the adaptation ... This article shows the chosen method in the case of our thermal simulation program for buildings, CODYRUN.
Call for Papers ACM Transactions on Modeling and Computer Simulation
L'Ecuyer, Pierre
Call for Papers: ACM Transactions on Modeling and Computer Simulation, Special Issue on Simulation ... Pierre L'Ecuyer, University of Montreal. In connection with the 2011 INFORMS Simulation Society Research ... simulation, improving the efficiency of simulations for those large systems, building effective and flexible ...
RESEARCH ARTICLE Open Access Computational modelling elucidates the
Paris-Sud XI, Université de
RESEARCH ARTICLE (Open Access): Computational modelling elucidates the mechanism of ciliary regulation ... Mobile or immotile cilia exist on every ... to the digestive, reproductive and respiratory systems of vertebrates [1] ... nephronophthisis, situs inversus pathology or infertility. The mechanism of cilia beating regulation is complex ...
Global sensitivity analysis of computer models with functional inputs
Boyer, Edmond
... -based sensitivity analysis techniques, based on the so-called Sobol indices, when some input variables ... computer codes which need a preliminary metamodeling step before performing the sensitivity analysis ... The "mean model" allows one to estimate the sensitivity indices of each scalar input variable, while ...
Learning words from sights and sounds: a computational model
Learning words from sights and sounds: a computational model. Deb K. Roy and Alex P. Pentland, MIT.
Computational model of aortic valve surgical repair using grafted pericardium
Computational model of aortic valve surgical repair using grafted pericardium. Peter E. Hammer ... Keywords: aortic valve repair, membrane, surgical planning, leaflet graft, pericardium. ABSTRACT: Aortic valve leaflets ... Difficulty is largely due to the complex geometry and function of the valve and the lower ...
Science Challenge Computational modeling of ultrafast digital electronics
Freericks, Jim
Science Challenge: Computational modeling of ultrafast digital electronics. To understand how ... properties in response to the needs of a particular device or situation. These smart electronics have the potential to lead to entirely new generations of electronic devices, such as military and civilian ...
Computational Modeling of Pancreatic Cancer Reveals Kinetics of Metastasis
Computational Modeling of Pancreatic Cancer Reveals Kinetics of Metastasis, Suggesting ... and size distribution of metastases as well as patient survival. These findings were validated ... death and one of the most aggressive malignancies in humans, with a five-year relative survival rate ...
NREL Computer Models Integrate Wind Turbines with Floating Platforms
Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore ...
Computational Model for Forced Expiration from Asymmetric Normal Lungs
Lutchen, Kenneth
Computational Model for Forced Expiration from Asymmetric Normal Lungs. Adam G. Polak ... losses along the airway branches. Calculations done for succeeding lung volumes result in the semidynamic ... to the choke points, characteristic differences of lung regional pressures and volumes, and a shape ...
Modeling of BWR core meltdown accidents for application in the MELRPI.MOD2 computer code
Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T
1985-04-01T23:59:59.000Z
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.
Friedman, Carey
We use the global 3-D chemical transport model GEOS-Chem to simulate long-range atmospheric transport of polycyclic aromatic hydrocarbons (PAHs). To evaluate the model’s ability to simulate PAHs with different volatilities, ...
A New Perspective for the Calibration of Computational Predictor Models.
Crespo, Luis Guillermo
2014-11-01T23:59:59.000Z
This paper presents a framework for calibrating computational models using data from several, possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
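The IPM idea above can be illustrated with a deliberately simplified recipe: fit a nominal model, then shift it by the extreme residuals so that the band of minimal constant width contains all observations. The paper's formulations optimize the spread directly; this least-squares-plus-margin version is only an assumed stand-in:

```python
import numpy as np

def interval_predictor(x, y, degree=1):
    """Fit a nominal polynomial model, then shift it down/up by the extreme
    residuals so the resulting band has minimal constant width while still
    containing every observation."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    lower = lambda t: np.polyval(coeffs, t) + residuals.min()
    upper = lambda t: np.polyval(coeffs, t) + residuals.max()
    return lower, upper

# Synthetic data: a line corrupted by bounded noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.uniform(-0.2, 0.2, size=50)
lower, upper = interval_predictor(x, y)
```

By construction every observation lies between `lower(x)` and `upper(x)`, and the band width reflects the spread of the data rather than a subjective probability assignment.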
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04T23:59:59.000Z
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
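The hybridization thermodynamics that the system models determine, among other things, primer melting temperatures. As a minimal point of reference, the classic Wallace rule gives a first estimate for very short oligos; the full method described above uses proper hybridization thermodynamics and kinetics, not this shortcut.

```python
def wallace_tm(seq):
    """Rough melting temperature (deg C) of a short primer by the
    Wallace rule: Tm = 2*(A+T) + 4*(G+C). Only a first estimate,
    valid for oligos under ~14 nt."""
    s = seq.upper()
    at = s.count('A') + s.count('T')
    gc = s.count('G') + s.count('C')
    return 2 * at + 4 * gc

assert wallace_tm("ATGC") == 12   # 2*2 + 4*2
```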
Loya, Sudarshan Kedarnath
2011-12-31T23:59:59.000Z
, this work includes the history of the fundamental reactions of automotive catalysts including carbon monoxide (CO), hydrogen (H2) and nitric oxide (NO) oxidation on a widely used material formulation (platinum catalyst on alumina washcoat). A detailed report...
Mafalda Dias; Jonathan Frazer; David Seery
2015-02-10T23:59:59.000Z
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
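In outline, the transport method integrates ordinary differential equations for the correlation functions themselves; schematically (suppressing the curved-field-space structure treated in the paper),

```latex
\frac{\mathrm{d}\Sigma_{ab}}{\mathrm{d}N}
  = u_{a}{}^{c}\,\Sigma_{cb} + u_{b}{}^{c}\,\Sigma_{ac},
\qquad
\mathcal{P}_{\zeta} = N_{a} N_{b}\,\Sigma^{ab},
```

where $N$ is the number of e-folds, $u_{ab}$ is the expansion tensor of the linearized perturbation equations, $\Sigma_{ab}$ is the two-point function of the field fluctuations, and $N_a$ encodes the gauge transformation to the curvature perturbation $\zeta$.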
A Computational Model of Limb Impedance Control Based on Principles of Internal Model Uncertainty
Vijayakumar, Sethu
School of Informatics, University of Edinburgh, Edinburgh, United Kingdom; ATR Computational Neuroscience Laboratories … uncertainties, along with energy and accuracy demands. The insights from this computational model could be used … This is an effortless task; however, if suddenly a seemingly random wind gust perturbs the umbrella, you will typically …
The origins of computer weather prediction and climate modeling
Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie
2008-03-20T23:59:59.000Z
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Action Models: A Reliability Modeling Formalism for Fault-Tolerant Distributed Computing Systems
Newcastle upon Tyne, University of
Action Models: A Reliability Modeling Formalism for Fault-Tolerant Distributed Computing Systems. Introduction: Model-based evaluation of the reliability of distributed systems has traditionally required expert … approach to analyze the reliability of fault-tolerant distributed systems. In particular, we want …
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Glotzer, Sharon C.
2013-08-28T23:59:59.000Z
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
MaRIE theory, modeling and computation roadmap executive summary
Lookman, Turab [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
The confluence of the MaRIE (Matter-Radiation Interactions in Extremes) and extreme (exascale) computing timelines offers a unique opportunity to co-design the elements of materials discovery with theory and high-performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials, and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail, for each of these areas, the current state of the art, the gaps that exist, and the roadmap to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities, which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure, and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science areas and highlights the key theoretical and experimental challenges. We also present a theory, modeling, and computation roadmap of the path to and beyond MaRIE in each of the science areas.
Paris-Sud XI, Université de
Non-linear inversion modeling for Ultrasound Computer Tomography: transition from soft to hard … Marseille cedex 20, France. ABSTRACT: Ultrasound Computer Tomography (UCT) is an imaging technique which has … experiments. Keywords: Ultrasound Computer Tomography, Inverse Born Approximation, Elliptical Projection
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J. [Los Alamos National Laboratory; Marcy, Peter W. [University of Wyoming
2012-06-07T23:59:59.000Z
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
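A sketch of why a compactly supported correlation function helps: covariance entries vanish exactly beyond the support, so large covariance matrices become sparse. The Bohman form below is the standard one; the 1-D grid, the range parameter theta, and the plain-NumPy construction are illustrative (the study itself works in ten dimensions with gradient information).

```python
import numpy as np

def bohman(r):
    """Compactly supported Bohman correlation: nonzero only for r < 1."""
    r = np.abs(r)
    k = (1.0 - r) * np.cos(np.pi * r) + np.sin(np.pi * r) / np.pi
    return np.where(r < 1.0, k, 0.0)

x = np.linspace(0.0, 5.0, 6)       # hypothetical input sites
theta = 1.5                        # hypothetical range parameter
R = bohman(np.abs(x[:, None] - x[None, :]) / theta)
sparsity = np.mean(R == 0.0)       # fraction of exactly-zero entries
assert R[0, 0] == 1.0 and sparsity > 0.5
```

With a strictly positive correlation function (e.g. Gaussian), every entry of R would be nonzero and sparse linear algebra could not be exploited.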
Final Report: Center for Programming Models for Scalable Parallel Computing
Mellor-Crummey, John [William Marsh Rice University]
2011-09-13T23:59:59.000Z
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Simple proof of equivalence between adiabatic quantum computation and the circuit model
Ari Mizel; Daniel A. Lidar; Morgan Mitchell
2007-02-26T23:59:59.000Z
We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.
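The quantity evaluated in the proof, the spectral gap along the interpolation H(s) = (1-s)H0 + sH1, can be illustrated on a single qubit; this toy example only shows what "computing the gap" means and is not the paper's general construction.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H0 = -X        # initial Hamiltonian with easy ground state |+>
H1 = -Z        # final Hamiltonian whose ground state |0> is "the answer"

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1.0 - s) * H0 + s * H1)
    gaps.append(evals[1] - evals[0])

min_gap = min(gaps)   # analytic minimum is sqrt(2), at s = 1/2
assert abs(min_gap - np.sqrt(2.0)) < 1e-9
```

The adiabatic theorem then bounds the required runtime by an inverse power of this minimum gap, which is why its scaling with problem size decides efficiency.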
Computer Modeling of Violent Intent: A Content Analysis Approach
Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.
2014-01-03T23:59:59.000Z
We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
Comprehensive computer model for magnetron sputtering. II. Charged particle transport
Jimenez, Francisco J., E-mail: fjimenez@ualberta.ca; Dew, Steven K. [Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6G 2V4 (Canada); Field, David J. [Smith and Nephew (Alberta) Inc., Fort Saskatchewan T8L 4K4 (Canada)
2014-11-01T23:59:59.000Z
Discharges for magnetron sputter thin film deposition systems involve complex plasmas that are sensitively dependent on magnetic field configuration and strength, working gas species and pressure, chamber geometry, and discharge power. The authors present a numerical formulation for the general solution of these plasmas as a component of a comprehensive simulation capability for planar magnetron sputtering. This is an extensible, fully three-dimensional model supporting realistic magnetic fields and is self-consistently solvable on a desktop computer. The plasma model features a hybrid approach involving a Monte Carlo treatment of energetic electrons and ions, along with a coupled fluid model for thermalized particles. Validation against a well-known one-dimensional system is presented. Various strategies for improving numerical stability are investigated as is the sensitivity of the solution to various model and process parameters. In particular, the effect of magnetic field, argon gas pressure, and discharge power are studied.
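As a flavor of the Monte Carlo charged-particle component, here is the standard Boris velocity update for a single electron in static E and B fields. It is a generic, widely used mover, not the authors' actual discretization, and the field values and time step below are made up.

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    Exactly preserves |v| when E = 0, which is why it is favored for
    tracking gyrating particles."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * dt * E

# Pure magnetic field: the speed must stay constant (gyration only)
v = np.array([1.0e5, 0.0, 0.0])           # m/s, hypothetical
B = np.array([0.0, 0.0, 0.01])            # T, hypothetical
for _ in range(1000):
    v = boris_push(v, np.zeros(3), B, qm=-1.76e11, dt=1e-11)
assert abs(np.linalg.norm(v) - 1.0e5) < 1.0
```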
EconoGrid: A detailed Simulation Model of a Standards-based Grid Compute Economy
EconoGrid: A Detailed Simulation Model of a Standards-based Grid Compute Economy. EconoGrid is a detailed simulation model, implemented in SLX, of a grid compute economy that implements selected … of users. In a grid compute economy, computing resources are sold to users in a market where price …
Shekhar, Ravi
2009-05-15T23:59:59.000Z
and amplitude variation with offset (AVO) results for our example model predicts that CO2 is easier to detect than brine in the fractured reservoirs. The effects of geochemical processes on seismics are simulated by time-lapse modeling for t = 1000 years. My...
Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational
Nagurney, Anna
Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational Approach. Outline: Background and Motivation; Supply Chain Challenges; The Medical Nuclear Supply Chain Network Design Model; The Computational Approach; Summary and Suggestions
S. Silich; J. Franco
1999-05-13T23:59:59.000Z
Here we present a possible solution to the apparent discrepancy between the observed properties of LMC bubbles and the standard, constant density bubble model. A two-dimensional model of a wind-driven bubble expanding from a flattened giant molecular cloud is examined. We conclude that the expansion velocities derived from spherically symmetric models are not always applicable to elongated young bubbles seen almost face-on due to the LMC orientation. In addition, an observational test to differentiate between spherical and elongated bubbles seen face-on is discussed.
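The "standard, constant density bubble model" the abstract compares against is the energy-conserving similarity solution, in which the shell radius grows as t^(3/5). A quick sketch of that law with illustrative (not the paper's) numbers:

```python
# Weaver-type wind-driven bubble in a uniform medium (cgs units):
#   R(t) ~ 0.76 * (L / rho)^(1/5) * t^(3/5)
def bubble_radius(L, rho, t):
    """Shell radius of an energy-conserving wind bubble."""
    return 0.76 * (L / rho) ** 0.2 * t ** 0.6

L = 1e37           # wind mechanical luminosity, erg/s (hypothetical)
rho = 1e-24        # ambient density, g/cm^3 (hypothetical)
Myr = 3.156e13     # seconds per megayear

R1 = bubble_radius(L, rho, 1 * Myr)
R2 = bubble_radius(L, rho, 2 * Myr)
assert abs(R2 / R1 - 2 ** 0.6) < 1e-9   # self-similar t^(3/5) growth
```

An elongated bubble expanding out of a flattened cloud, seen face-on, departs from this single R(t), which is the discrepancy the two-dimensional model addresses.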
Effective model for in-medium $\\bar{K}N$ interactions including the $L=1$ partial wave
Cieplý, Aleš; Krejčiřík, Vojtěch
2015-01-01T23:59:59.000Z
Coupled channels model of meson-baryon interactions based on the effective chiral Lagrangian is extended to account explicitly for the $\\Sigma(1385)$ resonance that dominates the $P$-wave $\\bar{K}N$ and $\\pi\\Sigma$ interactions at energies below the $\\bar{K}N$ threshold. The presented model aims at a uniform treatment of the $\\Lambda(1405)$ and $\\Sigma(1385)$ dynamics in the nuclear medium. We demonstrate the applicability of the model by confronting its predictions with the vacuum scattering data, then we follow with discussing the impact of nuclear matter on the $\\pi\\Sigma$ mass spectrum and on the energy dependence of the $K^{-}p$ branching ratios.
Goyal, Nitin, E-mail: nitin@unik.no [Carinthian Tech Research CTR AG, Europastraße 4/1, Technologiepark Villach, A-9524 Villach/St. Magdalen (Austria); Department of Electronics and Telecommunication, Norwegian University of Science and Technology, Trondheim NO7034 (Norway); Fjeldly, Tor A. [Department of Electronics and Telecommunication, Norwegian University of Science and Technology, Trondheim NO7034 (Norway)
2014-07-14T23:59:59.000Z
In this paper, a physics based analytical model is presented for calculation of the two-dimensional electron gas density and the bare surface barrier height of AlGaN/AlN/GaN material stacks. The presented model is based on the concept of distributed surface donor states and the self-consistent solution of Poisson equation at the different material interfaces. The model shows good agreement with the reported experimental data and can be used for the design and characterization of advanced GaN devices for power and radio frequency applications.
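A back-of-envelope version of the quantity being modeled is the widely quoted single-interface estimate of the 2DEG sheet density, ns = sigma/e - (eps / (d e^2)) (e*phi_b + E_F - dEc). All numbers below are illustrative assumptions; the paper instead solves Poisson's equation self-consistently with distributed surface donor states.

```python
e = 1.602e-19            # elementary charge, C
eps = 9.5 * 8.854e-12    # AlGaN permittivity, F/m (assumed)
d = 25e-9                # barrier thickness, m (assumed)
sigma = 1.1e17 * e       # net polarization sheet charge, C/m^2 (assumed)
phi_b = 1.4              # surface barrier height, V (assumed)
E_F = 0.1 * e            # Fermi level above channel, J (assumed)
dEc = 0.3 * e            # conduction-band offset, J (assumed)

ns = sigma / e - (eps / (d * e**2)) * (e * phi_b + E_F - dEc)
# Typical AlGaN/GaN 2DEG densities are ~1e12-1e13 cm^-2, i.e. 1e16-1e17 m^-2
assert 1e16 < ns < 1.1e17
```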
Not Available
2012-08-01T23:59:59.000Z
Cation degradation insights obtained by computational modeling could result in better performance and longer lifetime for alkaline membrane fuel cells.
Computational modeling of GTA (gas tungsten arc) welding with emphasis on surface tension effects
Zacharia, T.; David, S.A.
1990-01-01T23:59:59.000Z
A computational study of the convective heat transfer in the weld pool during gas tungsten arc (GTA) welding of Type 304 stainless steel is presented. The solution of the transport equations is based on a control-volume approach which directly utilizes the integral form of the governing equations. The computational model considers buoyancy, electromagnetic, and surface tension forces in the solution of convective heat transfer in the weld pool. In addition, the model treats the weld pool surface as a deformable free surface. The computational model includes weld metal vaporization and temperature-dependent thermophysical properties. The results indicate that consideration of weld pool vaporization effects and temperature-dependent thermophysical properties significantly influences the weld model predictions. Theoretical predictions of the weld pool surface temperature distributions and the cross-sectional weld pool size and shape were compared with corresponding experimental measurements. Comparison of the theoretically predicted and experimentally obtained surface temperature profiles indicated agreement within ±8%. The predicted weld cross-section profiles were found to agree very well with actual weld cross-sections for the best theoretical models. 26 refs., 8 figs.
Zeng, Chong; Xia, Jianghai; Miller, Richard D.; Tsoflias, Georgios P.
2012-01-01T23:59:59.000Z
Rayleigh waves are generated along the free surface and their propagation can be strongly influenced by surface topography. Modeling of Rayleigh waves in the near surface in the presence of topography is fundamental to the study of surface waves...
Code description: A dynamic modelling strategy for Bayesian computer model emulation
West, Mike
Code description: A dynamic modelling strategy for Bayesian computer model emulation. 1. Example data and code directory: The example data is provided under the directory "mydata": "design1.dat": this file … "design2.dat": this file contains the 60 validation runs. The Matlab code is provided under the directory …
FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA
Taylor, Gary
FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA. Gareth A. Taylor (…@brunel.ac.uk). ABSTRACT: This paper presents the computational modelling of welding phenomena within a versatile numerical … and Computational Solid Mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat …
HIGH RESOLUTION FORWARD AND INVERSE EARTHQUAKE MODELING ON TERASCALE COMPUTERS
Shewchuk, Jonathan
… highly populated seismic region in the U.S.; it has well-characterized geological structures (including … in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert … basin geology and earthquake sources, and to use this capability to model and forecast strong ground …
ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS
Goodarz Ahmadi
2001-10-01T23:59:59.000Z
In the second year of the project, the Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is further developed. The approach uses an Eulerian analysis of liquid flows in the bubble column and makes use of Lagrangian trajectory analysis for the bubble and particle motions. An experimental setup for studying a two-dimensional bubble column is also developed. The operation of the bubble column is being tested and a diagnostic methodology for quantitative measurements is being developed. An Eulerian computational model for the flow condition in the two-dimensional bubble column is also being developed. The liquid and bubble motions are being analyzed and the results are being compared with the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures are also being studied. Further progress was made in developing a thermodynamically consistent model for multiphase slurry flows, with and without chemical reaction, in a state of turbulent motion. The balance laws have been obtained and the constitutive laws are being developed. Progress was also made in measuring the concentration and velocity of particles of different sizes near a wall in a duct flow; the technique of phase-Doppler anemometry was used in these studies. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent, rate-dependent, anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction, and to establish the material parameters of the model. (2) To provide experimental data for phasic fluctuation and mean velocities, as well as the solid volume fraction, in the shear flow devices. (3) To develop an accurate computational capability incorporating the new rate-dependent and anisotropic model for analyzing reacting and nonreacting slurry flows, and to solve a number of technologically important problems related to Fischer-Tropsch (F-T) liquid fuel production processes. (4) To verify the validity of the developed model by comparing the predicted results with the performed and available experimental data under idealized conditions.
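For a single small particle in quiescent liquid, the Lagrangian trajectory analysis described above reduces to integrating a drag-plus-buoyancy force balance. A minimal sketch with Stokes drag, forward-Euler integration, and made-up material properties (the project's model is far more complete):

```python
rho_p, rho_f = 2500.0, 1000.0    # particle / liquid density, kg/m^3 (assumed)
d = 100e-6                       # particle diameter, m (assumed)
mu = 1.0e-3                      # liquid viscosity, Pa*s (assumed)
g = 9.81                         # gravity, m/s^2

tau = rho_p * d**2 / (18.0 * mu)        # Stokes response time
v_T = tau * g * (1.0 - rho_f / rho_p)   # terminal settling velocity

v, dt = 0.0, tau / 50.0
for _ in range(2000):                   # dv/dt = g*(1 - rho_f/rho_p) - v/tau
    v += dt * (g * (1.0 - rho_f / rho_p) - v / tau)

assert abs(v - v_T) / v_T < 1e-6        # velocity relaxes to terminal value
```

In the full Eulerian-Lagrangian model, the fluid velocity seen by each particle comes from the Eulerian liquid solution rather than being zero as assumed here.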
Winters, W.S.
1984-01-01T23:59:59.000Z
An overview of the computer code TOPAZ (Transient-One-Dimensional Pipe Flow Analyzer) is presented. TOPAZ models the flow of compressible and incompressible fluids through complex and arbitrary arrangements of pipes, valves, flow branches and vessels. Heat transfer to and from the fluid containment structures (i.e. vessel and pipe walls) can also be modeled. This document includes discussions of the fluid flow equations and containment heat conduction equations. The modeling philosophy, numerical integration technique, code architecture, and methods for generating the computational mesh are also discussed.
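A one-pipe caricature of what a network code of this kind evaluates per element is the Darcy-Weisbach pressure drop with a Reynolds-number-dependent friction factor. The fluid properties and geometry below are hypothetical, and TOPAZ itself additionally handles compressibility, branches, valves, and wall heat transfer.

```python
import math

def darcy_dp(m_dot, rho, mu, D, L):
    """Pressure drop (Pa) across one pipe element via Darcy-Weisbach:
    laminar f = 64/Re, Blasius f = 0.3164*Re^-0.25 for smooth
    turbulent flow."""
    A = math.pi * D**2 / 4.0
    v = m_dot / (rho * A)
    Re = rho * v * D / mu
    f = 64.0 / Re if Re < 2300.0 else 0.3164 * Re**-0.25
    return f * (L / D) * 0.5 * rho * v**2

# Water through 10 m of 50 mm pipe at 0.5 kg/s (hypothetical)
dp = darcy_dp(m_dot=0.5, rho=1000.0, mu=1e-3, D=0.05, L=10.0)
assert dp > 0.0
```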
Modeling-Computer Simulations (Laney, 2005) | Open Energy Information
Property:Data Comparison to Computational Models | Open Energy Information
Claude Daviau; Jacques Bertrand
2014-08-27T23:59:59.000Z
A wave equation with mass term is studied for all particles and antiparticles of the first generation: electron and its neutrino, positron and antineutrino, quarks $u$ and $d$ with three states of color and antiquarks $\\overline{u}$ and $\\overline{d}$. This wave equation is form invariant under the $Cl_3^*$ group generalizing the relativistic invariance. It is gauge invariant under the $U(1)\\times SU(2) \\times SU(3)$ group of the standard model of quantum physics. The wave is a function of space and time with value in the Clifford algebra $Cl_{1,5}$. All features of the standard model, charge conjugation, color, left waves, Lagrangian formalism, are linked to the geometry of this extended space-time.
Liao, C; Quinlan, D; Panas, T
2009-10-06T23:59:59.000Z
General-purpose languages, such as C++, permit the construction of various high-level abstractions to hide redundant, low-level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates, and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large-scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are unlikely to abandon the help of high-level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high-productivity, high-performance computing. We believe that standard or domain-specific semantics associated with high-level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. Meanwhile, an accessible source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high-level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations becomes possible within an abstraction-friendly and semantics-aware programming model.
In the future, we will apply our programming model to more large scale applications. In particular, we plan to classify and formalize more high level abstractions and semantics which are relevant to high performance computing. We will also investigate better ways to allow language designers, library developers and programmers to communicate abstraction and semantics information with each other.
Modeling the Fracture of Ice Sheets on Parallel Computers
Waisman, Haim [Columbia University]; Tuminaro, Ray [Sandia National Labs]
2013-10-10T23:59:59.000Z
The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics based models for ice, novel numerical tools to enable the modeling of the physics and by collaboration with the ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models due in part to large computational costs associated with the accurate modeling of this complex phenomena. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offers significant advancement to the field and closes a large gap of knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters and presentations at technical conferences in the field. In particular significant progress has been made in the mechanics of ice, fracture of ice sheets and ice shelves in polar regions and sophisticated numerical methods that enable the solution of the physics in an efficient way.
A 2007 Model Curriculum for a Liberal Arts Degree in Computer Science
Metaxas, Takis
A 2007 Model Curriculum for a Liberal Arts Degree in Computer Science. Liberal Arts Computer Science Consortium, February 25, 2007. In 1986, guidelines for a computer science major degree program offered in the context of the liberal arts were developed by the Liberal Arts Computer Science Consortium (LACS) [4] …
Computational fluid dynamic modeling of fluidized-bed polymerization reactors
Rokkam, Ram [Ames Laboratory
2012-11-02T23:59:59.000Z
Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas and liquid phase reactors. The gas phase processes are more advantageous, and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas phase polymerization fluidized bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape etc. Accumulation of electrostatic charges in the fluidized-bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized-bed. The multi-fluid CFD model for gas-particle flow is based on the kinetic theory of granular flows closures. The electrostatic model is developed based on a fixed, size-dependent charge for each type of particle (catalyst, polymer, polymer fines) phase. The combined CFD model is first verified using simple test cases, validated with experiments and applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For the scale up of fluidized bed reactor, filtered models are developed and implemented on pilot scale reactor.
Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model
Sharma, Chandan; Raustad, Richard
2013-06-01T23:59:59.000Z
Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy-efficient air-conditioning systems that offer electricity savings and a reduction in peak electric demand while providing improved individual zone setpoint control. One key advantage of VRF systems is minimal duct losses, which significantly reduce energy use and duct space. However, limited data are available on their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, more field performance data for these systems are highly desirable. An effort was made in this direction by monitoring VRF system performance over an extended period of time in a US national laboratory test facility. In response to increasing demand from the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents a comparison of energy consumption as measured in the national laboratory and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created from outdoor temperature and relative humidity measured at the test facility. Other inputs to the model included the building construction, a VRF system model based on lab-measured performance, building occupancy, lighting and plug loads, and thermostat set-points. Infiltration model inputs were adjusted at the outset to tune the computer model, and subsequent field measurements were then compared to the simulation results. Differences between the computer model results and the field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.
A Comprehensive Physical Model for Light Reflection
Paris-Sud XI, Université de
Computer Graphics, Volume 25, Number 4, July 1991. Cornell University, Ithaca, NY 14853. Abstract: a new general reflectance model for computer graphics, suitable for computer graphics applications. Predicted reflectance distributions compare favorably
An Exact Modeling of Signal Statistics in Energy-integrating X-ray Computed Tomography
Yi Fan ... used by modern computed tomography (CT) scanners and has been an interesting research topic. INTRODUCTION: In x-ray computed tomography (CT), the Poisson noise model has been widely used in noise
Michael V. Glazoff; Jeong-Whan Yoon
2013-08-01T23:59:59.000Z
In this report (prepared in collaboration with Prof. Jeong-Whan Yoon, Deakin University, Melbourne, Australia), a research effort was made to develop a non-associated flow rule for zirconium. Since Zr is a hexagonal close-packed (hcp) material, it is impossible to describe its plastic response under arbitrary loading conditions with any associated flow rule (e.g., von Mises). As a result of the strong tension-compression asymmetry of the yield stress and anisotropy, zirconium displays plastic behavior that requires a more sophisticated approach. Consequently, a new general asymmetric yield function has been developed which mathematically accommodates the four directional anisotropies along 0 degrees, 45 degrees, 90 degrees, and biaxial, under tension and compression. Stress anisotropy has been completely decoupled from the r-value by using non-associated flow plasticity, in which the yield function and plastic potential are treated separately to capture stress and r-value directionalities, respectively. This theoretical development has been verified using Zr alloys at room temperature as an example, as these materials have a very strong SD (strength differential) effect. The proposed yield function models reasonably well the evolution of yield surfaces for a zirconium clock-rolled plate during in-plane and through-thickness compression. It has been found that this function can predict both tension and compression asymmetry mathematically without any numerical tolerance and shows significant improvement over previously reported functions. Finally, at the end of the report, a program of further research is outlined, aimed at constructing tensorial relationships for the temperature- and fluence-dependent creep surfaces for Zr, Zircaloy-2, and Zircaloy-4.
Computational Biology and Bioinformatics 10.10 Models of substitution I: Basic Models A
Goldschmidt, Christina
Course schedule (excerpt): & stochastic grammars; 7.11 RNA structures; 9.11 Finding signals in sequences; 14.11 Challenges in genome ... of structure & movements & shapes & grammars; 28.11 Integrative genomics: the omics (DNA, mRNA, protein, metabolite, phenotype); 30.11 Integrative genomics: mapping
Efficient Computation of Info-Gap Robustness for Finite Element Models
Stull, Christopher J. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Williams, Brian J. [Los Alamos National Laboratory
2012-07-05T23:59:59.000Z
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. Compared to statistical sampling, the proposed methodology offers highly accurate approximations of the info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to nonlinear systems.
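The adjoint trick for Ax = b models can be illustrated concretely. For a scalar output J = c^T x, one extra solve A^T lambda = c gives the sensitivity dJ/dtheta = -lambda^T (dA/dtheta) x for any parameter theta, so parameter sweeps need far fewer solves than finite differencing. The matrix, parameter, and output below are hypothetical, not taken from the report:

```python
import numpy as np

# Illustrative adjoint sensitivity for A(theta) x = b (hypothetical system,
# not the LANL code): one adjoint solve replaces per-parameter re-solves.
def A_of(theta):
    # A hypothetical parameterized stiffness-like matrix.
    return np.array([[2.0 + theta, -1.0],
                     [-1.0, 2.0]])

b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])        # output of interest: J = c^T x = x[1]
theta = 0.5

A = A_of(theta)
x = np.linalg.solve(A, b)       # forward solve
lam = np.linalg.solve(A.T, c)   # adjoint solve A^T lam = c
dA = np.array([[1.0, 0.0],      # dA/dtheta for this parameterization
               [0.0, 0.0]])
dJ_adjoint = -lam @ dA @ x      # dJ/dtheta via the adjoint identity

# Sanity check against a one-sided finite difference.
eps = 1e-6
x_p = np.linalg.solve(A_of(theta + eps), b)
dJ_fd = (c @ x_p - c @ x) / eps
print(dJ_adjoint, dJ_fd)
```

For this 2x2 example, x[1] = 1/(3 + 2 theta), so the exact sensitivity at theta = 0.5 is -2/16 = -0.125, which both estimates recover.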
Computational modeling and analysis of thermoelectric properties of nanoporous silicon
Li, H.; Yu, Y.; Li, G., E-mail: gli@clemson.edu [Department of Mechanical Engineering, Clemson University, Clemson, South Carolina 29634-0921 (United States)
2014-03-28T23:59:59.000Z
In this paper, thermoelectric properties of nanoporous silicon are modeled and studied by using a computational approach. The computational approach combines a quantum non-equilibrium Green's function (NEGF) coupled with the Poisson equation for electrical transport analysis, a phonon Boltzmann transport equation (BTE) for phonon thermal transport analysis, and the Wiedemann-Franz law for calculating the electronic thermal conductivity. By solving the NEGF/Poisson equations self-consistently using a finite difference method, the electrical conductivity {sigma} and Seebeck coefficient S of the material are numerically computed. The BTE is solved by using a finite volume method to obtain the phonon thermal conductivity k{sub p}, and the Wiedemann-Franz law is used to obtain the electronic thermal conductivity k{sub e}. The figure of merit of nanoporous silicon is calculated by ZT = S{sup 2}{sigma}T/(k{sub p}+k{sub e}). The effects of doping density, porosity, temperature, and nanopore size on thermoelectric properties of nanoporous silicon are investigated. It is confirmed that nanoporous silicon has significantly higher thermoelectric energy conversion efficiency than its nonporous counterpart. Specifically, this study shows that, with an n-type doping density of 10{sup 20} cm{sup -3}, a porosity of 36%, and a nanopore size of 3 nm x 3 nm, the figure of merit ZT can reach 0.32 at 600 K. The results also show that the degradation of the electrical conductivity of nanoporous Si due to the inclusion of nanopores is compensated by the large reduction in the phonon thermal conductivity and the increase in the absolute value of the Seebeck coefficient, resulting in a significantly improved ZT.
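The ZT arithmetic above is easy to reproduce. The sketch below combines the figure-of-merit formula ZT = S^2 sigma T / (k_p + k_e) with the Wiedemann-Franz estimate k_e = L sigma T; the transport values are illustrative placeholders, not results from the paper.

```python
# Figure of merit ZT = S^2 * sigma * T / (k_p + k_e), with the electronic
# thermal conductivity estimated via the Wiedemann-Franz law k_e = L*sigma*T.
# The transport values below are illustrative, not taken from the paper.
L = 2.44e-8          # Lorenz number, W Ohm / K^2
S = -2.0e-4          # Seebeck coefficient, V/K (negative for n-type)
sigma = 5.0e4        # electrical conductivity, S/m
k_p = 1.0            # phonon thermal conductivity, W/(m K)
T = 600.0            # temperature, K

k_e = L * sigma * T                  # electronic thermal conductivity
ZT = S**2 * sigma * T / (k_p + k_e)
print(k_e, ZT)
```

Note how strongly ZT depends on suppressing k_p: with these placeholder numbers, halving k_p raises ZT by roughly a third, which is the lever the nanopores provide.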
High-Performance Computer Modeling of the Cosmos-Iridium Collision
Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W
2009-08-28T23:59:59.000Z
This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.
Potts models with magnetic field: arithmetic, geometry, and computation
Shival Dasu; Matilde Marcolli
2014-12-26T23:59:59.000Z
We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.
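For readers who want a concrete handle on the partition function under discussion, it can be enumerated directly on a small graph. The sketch below uses a simple numerical convention (an external field H coupling to spin state 0), which is an illustrative choice, not the sheaf-theoretic formulation of the paper:

```python
import itertools, math

def potts_Z(q, edges, n, J, H):
    """Brute-force Potts partition function on a graph with n vertices:
    Z = sum over spin assignments s of exp(J * #{aligned edges}
                                           + H * #{vertices with s = 0}).
    The coupling of H to state 0 is an illustrative convention."""
    Z = 0.0
    for s in itertools.product(range(q), repeat=n):
        energy = J * sum(s[i] == s[j] for i, j in edges)
        energy += H * sum(si == 0 for si in s)
        Z += math.exp(energy)
    return Z

# Triangle graph with q = 3 states.
triangle = [(0, 1), (1, 2), (0, 2)]
print(potts_Z(3, triangle, 3, J=0.0, H=0.0))   # decoupled limit: Z = q^n = 27
```

Two limits make handy checks: with J = H = 0 every configuration has weight 1, so Z = q^n; with J = 0 the sites decouple and Z = (e^H + q - 1)^n. Brute force scales as q^n, which is exactly why the paper's complexity questions are interesting.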
A Polarizable QM/MM Explicit Solvent Model for Computational Electrochemistry in Water
Wang, Lee-Ping
We present a quantum mechanical/molecular mechanical (QM/MM) explicit solvent model for the computation of standard reduction potentials E{sub 0}. The QM/MM model uses density functional theory (DFT) to model the ...
Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers
J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan
2011-06-21T23:59:59.000Z
Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems that produce stable nanocrystalline coatings forming a protective, continuous scale of either Al{sub 2}O{sub 3} or Cr{sub 2}O{sub 3}. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al{sub 2}O{sub 3} scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed and, among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best-quality coating with a minimum number of shallow defects; the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al{sub 2}O{sub 3} scale, than widely used MCrAlY coatings.
However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential for providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas and corrosion attack of the base metal in the spalled areas was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to variation of defects in the coating between samples. It is suspected that cauliflower-type defects in the coating were responsible for the spallation in isolated areas. Thus, a defect-free, good-quality coating is key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.
MOMDIS: a Glauber model computer code for knockout reactions
C. A. Bertulani; A. Gade
2006-04-12T23:59:59.000Z
A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used, with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions at intermediate-energy collisions (30 MeV ≤ E{sub lab}/nucleon ≤ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g., neutron-rich nuclei), and for studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, as in, for instance, the proposed RIA (Rare Isotope Accelerator) facility in the US.
Computational Mechanistic Studies of Acid-Catalyzed Lignin Model Dimers for Lignin Depolymerization
Kim, S.; Sturgeon, M. R.; Chmely, S. C.; Paton, R. S.; Beckham, G. T.
2013-01-01T23:59:59.000Z
Lignin is a heterogeneous alkyl-aromatic polymer that constitutes up to 30% of plant cell walls and is used for water transport, structure, and defense. The highly irregular and heterogeneous structure of lignin presents a major obstacle to the development of strategies for its deconstruction and upgrading. Here we present mechanistic studies of the acid-catalyzed cleavage of lignin aryl-ether linkages, combining experimental studies and quantum chemical calculations. Quantum mechanical calculations provide a detailed interpretation of the reaction mechanisms, including possible intermediates and transition states. Solvent effects on the hydrolysis reactions were incorporated through the use of a conductor-like polarizable continuum model (CPCM) and with cluster models including explicit water molecules in the first solvation shell. Reaction pathways were computed for four lignin model dimers: 2-phenoxy-phenylethanol (PPE), 1-(para-hydroxyphenyl)-2-phenoxy-ethanol (HPPE), 2-phenoxy-phenyl-1,3-propanediol (PPPD), and 1-(para-hydroxyphenyl)-2-phenoxy-1,3-propanediol (HPPPD). Lignin model dimers with a para-hydroxyphenyl ether (HPPE and HPPPD) show substantial differences in reactivity relative to the phenyl ether compounds (PPE and PPPD), which have been clarified theoretically and experimentally. The significance of these results for acid deconstruction of lignin in plant cell walls will be discussed.
Li, Yaohang
computational density to a processor in order to increase CPU and other resource utilization rates. We use our
Ray tracing computations in the smoothed SEG/EAGE Salt Model
Cerveny, Vlastislav
Ray tracing computations in the smoothed SEG/EAGE Salt Model. Václav Bucha, Department ... to compute rays and synthetic seismograms of refracted and reflected P-waves in the smoothed SEG/EAGE Salt Model. The original 3-D SEG/EAGE Salt Model (Aminzadeh et al. 1997) is a very complex model and cannot be used for ray
Solar wind modeling: a computational tool for the classroom
Woolsey, Lauren N
2015-01-01T23:59:59.000Z
This article presents a Python model and library that can be used for student investigation of the application of fundamental physics on a specific problem: the role of magnetic field in solar wind acceleration. The paper begins with a short overview of the open questions in the study of the solar wind and how they relate to many commonly taught physics courses. The physics included in the model, The Efficient Modified Parker Equation Solving Tool (TEMPEST), is laid out for the reader. Results using TEMPEST on a magnetic field structure representative of the minimum phase of the Sun's activity cycle are presented and discussed. The paper suggests several ways to use TEMPEST in an educational environment and provides access to the current version of the code.
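As a classroom-scale check that does not require the code itself, the central quantity in Parker's isothermal wind model is the critical (sonic) radius r_c = GM/(2 c_s^2), with c_s the isothermal sound speed. The script below evaluates it for a 10^6 K hydrogen corona using standard constants; it is a back-of-the-envelope sketch, not TEMPEST:

```python
import math

# Parker isothermal wind: the solution passes through a critical (sonic)
# point at r_c = G*M / (2*c_s^2), with c_s = sqrt(k_B*T / (mu*m_p)).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
k_B = 1.381e-23      # Boltzmann constant, J/K
m_p = 1.673e-27      # proton mass, kg
T = 1.0e6            # coronal temperature, K
mu = 1.0             # mean molecular weight (pure hydrogen, illustrative)

c_s = math.sqrt(k_B * T / (mu * m_p))
r_c = G * M_sun / (2.0 * c_s ** 2)
print(c_s / 1e3, r_c / R_sun)   # sound speed in km/s, r_c in solar radii
```

The answer, a sound speed near 90 km/s and a sonic point around 11 solar radii, is the kind of order-of-magnitude result students can verify by hand before running the full wind solver.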
and NCAR in the development of a comprehensive earth systems model. This model incorporates the most advanced high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well. Our collaborators in climate research include the National Center
final report for Center for Programming Models for Scalable Parallel Computing
Johnson, Ralph E
2013-04-10T23:59:59.000Z
This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.
COMPUTATIONAL AND EXPERIMENTAL MODELING OF SLURRY BUBBLE COLUMN REACTORS
Paul Lam; Dimitri Gidaspow
2001-08-01T23:59:59.000Z
This project is a collaborative effort between the University of Akron, the Illinois Institute of Technology, and two industrial partners: UOP and Energy International. The tasks involve the development of transient two- and three-dimensional computer codes for slurry bubble column reactors, optimization, comparison to data, and measurement of input parameters such as the viscosity and restitution coefficients. To understand turbulence, measurements were made in the riser with 530 micron glass beads using a PIV technique. This report summarizes the measurements and simulations completed, as described in detail in the attached paper, ''Computational and Experimental Modeling of Three-Phase Slurry-Bubble Column Reactor.'' The Particle Image Velocimetry method described elsewhere (Gidaspow and Huilin, 1996) was used to measure the axial and tangential velocities of the particles. This method was modified with the use of a rotating colored transparent disk. The velocity distributions obtained with this method show that the distribution is close to Maxwellian. From the velocity measurements, the normal and shear stresses were computed. Also, with the use of the CCD camera, a technique was developed to measure the solids volume fraction. The granular temperature profile follows the solids volume fraction profile. As predicted by theory, the granular temperature is highest at the center of the tube. The normal stress in the direction of the flow is approximately 10 times larger than that in the tangential direction. The <{nu}{prime}{sub z}{nu}{prime}{sub z}> component is lower at the center, where the <{nu}{prime}{sub {theta}}{nu}{prime}{sub {theta}}> component is higher. The Reynolds shear stress was small, producing a restitution coefficient near unity. The normal Reynolds stress in the direction of flow is large because it is produced by the large velocity gradient in the direction of flow, compared to the small gradients in the {theta} and r directions.
The kinetic theory gives values of viscosity that agree with our previous measurements (Gidaspow, Wu and Mostofi, 1999). The values of viscosity obtained from pressure-drop-minus-weight-of-bed measurements agree at the center of the tube.
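The granular temperature reported from such PIV measurements is simply the mean kinetic energy of the particle velocity fluctuations, theta = (1/3)(<v'{sub r}v'{sub r}> + <v'{sub theta}v'{sub theta}> + <v'{sub z}v'{sub z}>). A minimal estimator from sampled velocities is sketched below; the numbers are synthetic, chosen only to mimic axial fluctuations larger than the radial and tangential ones:

```python
# Granular temperature theta = (1/3) * (var(v_r) + var(v_theta) + var(v_z)),
# estimated from sampled particle velocities. The samples are synthetic,
# for illustration only (not the published riser data).
def granular_temperature(vr, vt, vz):
    def var(v):
        m = sum(v) / len(v)                 # mean removes the bulk flow
        return sum((x - m) ** 2 for x in v) / len(v)
    return (var(vr) + var(vt) + var(vz)) / 3.0

vr = [0.01, -0.02, 0.015, -0.005]           # radial samples, m/s
vt = [0.02, -0.01, 0.005, -0.015]           # tangential samples, m/s
vz = [0.30, 0.18, 0.25, 0.22]               # axial: mean flow + fluctuations
theta = granular_temperature(vr, vt, vz)
print(theta)
```

Subtracting the per-component mean before squaring is what separates the fluctuation energy from the bulk motion, mirroring how the Reynolds stresses are formed from the same velocity records.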
Computational Modeling and Assessment Of Nanocoatings for Ultra Supercritical Boilers
David W. Gandy; John P. Shingledecker
2011-04-11T23:59:59.000Z
Forced outages and boiler unavailability in conventional coal-fired fossil power plants are most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall-thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burner systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low-alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce or eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project, jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096), focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron- and nickel-base) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% Ni is required in Fe-based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long-term thermal exposure test results showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. To enhance the durability of these coatings, it is necessary to develop a diffusion barrier interlayer coating such as TiN and/or AlN. The third task, 'Process Advanced MCrAl Nanocoating Systems', of the six-task project jointly sponsored by the Electric Power Research Institute (EPRI) and the U.S.
Department of Energy (DE-FC26-07NT43096) focused on processing of advanced nanocrystalline coating systems and development of diffusion barrier interlayer coatings. Among the diffusion interlayer coatings evaluated, the TiN interlayer coating was found to be the optimum one. This report describes the research conducted under the Task 3 workscope.
Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Johnson, Gary E.
2005-03-10T23:59:59.000Z
Computational fluid dynamics (CFD) models were developed to support the siting and design of a behavioral guidance system (BGS) structure in The Dalles Dam (TDA) forebay on the Columbia River. The work was conducted by Pacific Northwest National Laboratory for the U.S. Army Corps of Engineers, Portland District (CENWP). The CFD results were an invaluable tool for the analysis, both from a regional and agency perspective (for the fish passage evaluation) and from a CENWP perspective (supporting the BGS design and location). The new CFD model (TDA forebay model) included the latest bathymetry (surveyed in 1999) and a detailed representation of the engineered structures (spillway, powerhouse main, fish, and service units). The TDA forebay model was designed and developed so that future studies could easily modify or, to a large extent, reuse large portions of the existing mesh. This study resulted in three key findings: (1) The TDA forebay model matched field-measured velocity data well. (2) The TDA forebay model matched observations made at the 1:80 general physical model of the TDA forebay. (3) During the course of this study, the methodology typically used by CENWP to contour topographic data was shown to be inaccurate when applied to widely spaced transect data; contouring methodologies need to be revisited, especially before undertaking steps such as modifying the bathymetry in the 1:80 general physical model. Future alignments can be evaluated with the model staying largely intact. The next round of analysis will need to address fish passage demands and navigation concerns. CFD models can be used to identify the most promising locations and to provide quantified metrics for biological, hydraulic, and navigation criteria. The most promising locations should then be further evaluated in the 1:80 general physical model.
CPT: An Energy-Efficiency Model for Multi-core Computer Systems
Shi, Weisong
Weisong Shi, Shinan Wang, and Bing ... energy efficiency of computer systems. These techniques affect the energy efficiency across different layers ... a metric that represents the energy efficiency of a computer system, for a specific configuration, given
A SOFTWARE ARCHITECTURE FOR DEVELOPMENTAL MODELING IN PLANTS: THE COMPUTABLE PLANT PROJECT
Mjolsness, Eric
Victoria ... We present the software architecture of the Computable Plant Project, a multidisciplinary computational effort ... including dynamic objects and relationships, and a C++ code generator to translate SBML into highly efficient simulation
Model-driven Configuration of Cloud Computing Auto-scaling Infrastructure
Schmidt, Douglas C.
Brian Dougherty (Vanderbilt University, briand@dre.vanderbilt.edu), Jules White (Virginia Tech, julesw@vt.edu), and Douglas C. Schmidt (Vanderbilt University, schmidt@dre.vanderbilt.edu). Abstract: Cloud computing uses virtualized computational resources to allow
Pedram, Massoud
Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using the Fractal Modeling ... In this paper, we investigate the characteristics of the cloud computing requests received ... the alpha-stable distribution. Keywords: cloud computing; alpha-stable distribution; fractional order
Demonstration of a computer model for residual radioactive material guidelines, RESRAD
Yu, C.; Yuan, Y.C.; Zielen, A.J.; Wallo, A. III (Argonne National Lab., IL (USA); USDOE, Washington, DC (USA))
1989-01-01T23:59:59.000Z
A computer model was developed to calculate residual radioactive material guidelines for the US Department of Energy (DOE). This model, called RESRAD, can be run on an IBM or IBM-compatible microcomputer. Seven potential exposure pathways from contaminated soil are analyzed, including external radiation exposure and internal radiation exposure from inhalation and food ingestion. The RESRAD code has been applied at several DOE sites to derive soil cleanup guidelines. The experience gained indicates that a comprehensive set of site-specific hydrogeologic and geochemical input parameters must be used for a realistic pathway analysis. The RESRAD code is a useful tool; it is easy to run and very user-friendly. 6 refs., 12 figs.
Computational Fluid Dynamics Modeling of the Operation of a Flame Ionization Sensor
Huckaby, E.D.; Chorpening, B.T.; Thornton, J.D.
2007-03-01T23:59:59.000Z
The sensors and controls research group at the United States Department of Energy (DOE) National Energy Technology Laboratory (NETL) is continuing to develop the Combustion Control and Diagnostics Sensor (CCADS) for gas turbine applications. CCADS uses the electrical conduction of the charged species generated during the combustion process to detect combustion instabilities and monitor equivalence ratio. As part of this effort, combustion models are being developed that include the interaction between the electric field and the transport of charged species. The primary combustion process is computed using a flame wrinkling model (Weller et al. 1998), which is a component of the OpenFOAM toolkit (Jasak et al. 2004). A sub-model for the transport of charged species is attached to this model. The formulation of the charged-species model is similar to that applied by Penderson and Brown (1993) for the simulation of laminar flames. The sub-model consists of an additional flux due to the electric field (drift flux) added to the equations for the charged-species concentrations, together with the solution of the electric potential from the resolved charge density. The subgrid interactions between the electric field and charged-species transport have been neglected. Using the above procedure, numerical simulations are performed and the results compared with several recent CCADS experiments.
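The drift-flux idea described here amounts to adding an electric-field term to the usual Fickian species flux, J = -D dc/dx + mu c E. The one-dimensional explicit update below is a toy sketch of that idea with invented coefficients; the actual CCADS model is solved within OpenFOAM, not with this scheme:

```python
# 1-D drift-diffusion update for a charged-species concentration c(x):
#   dc/dt = d/dx ( D * dc/dx - mu * c * E ),  i.e. flux J = -D dc/dx + mu*c*E.
# Explicit Euler, finite-volume form with zero-flux boundaries, so total
# charge is conserved. Coefficients are illustrative, not from the paper.
def step(c, D, mu, E, dx, dt):
    n = len(c)
    flux = [0.0] * (n + 1)              # flux[i] sits on the face i-1 | i
    for i in range(1, n):
        diff = -D * (c[i] - c[i - 1]) / dx            # Fickian part
        drift = mu * 0.5 * (c[i] + c[i - 1]) * E      # field-driven part
        flux[i] = diff + drift
    return [c[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(n)]

c = [0.0, 0.0, 1.0, 0.0, 0.0]           # initial charge blob
c_new = step(c, D=1e-3, mu=2e-3, E=1.0, dx=0.01, dt=1e-3)
print(c_new, sum(c_new))
```

Because the update is written in flux form with zero flux at the walls, the cell sum is conserved exactly, and a positive mu E skews the spreading blob toward larger x, which is the qualitative fingerprint of the drift term.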
Paris-Sud XI, Université de
specially designed within the framework of this research. A computational heat transfer model is constructed. The developed mean model constitutes the basis of the computational stochastic heat transfer model that has been compared to the experimental ones. Keywords: computational heat transfer modeling, uncertainties, probabilistic modeling
CASTING DEFECT MODELING IN AN INTEGRATED COMPUTATIONAL MATERIALS ENGINEERING APPROACH
Sabau, Adrian S [ORNL
2015-01-01T23:59:59.000Z
To accelerate the introduction of new cast alloys, simultaneous modeling and simulation of multiphysical phenomena need to be considered in the design and optimization of the mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. Data on calculated solidification shrinkage are presented, and their effects on microporosity levels are discussed. Examples are given of predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for the sake of completeness and to illustrate how the length scales of the microstructure features, as well as porosity defects, are taken into account in modeling the mechanical properties. Thus, data on casting defects, including microstructure features, are crucial for evaluating the final performance-related properties of the component. ACKNOWLEDGEMENTS: This work was performed under a Cooperative Research and Development Agreement (CRADA) with Nemak Inc. and Chrysler Co. for the project 'High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines'. The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through the Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.
A Benchmark of Computational Models of Saliency to Predict Human Fixations
Judd, Tilke
2012-01-13T23:59:59.000Z
Many computational models of visual attention have been created from a wide variety of different approaches to predict where people look in images. Each model is usually introduced by demonstrating performances on new ...
Modeling of the Aging Viscoelastic Properties of Cement Paste Using Computational Methods
Li, Xiaodan
2012-07-16T23:59:59.000Z
computational model using finite element method to predict the viscoelastic behavior of cement paste, and using this model, virtual tests can be carried out to improve understanding of the mechanisms of viscoelastic behavior. The primary finding from...
DEVELOPMENT OF AN INTERACTIVE COMPUTER SIMULATION MODEL FOR DESIGNING
processes and high product quality standards required. In particular, computer simulation of industrial equilibrium and transient warp analysis, and displaying the results graphically. Four example panels were
Development of Reduced Computational Models for Alfvén Instabilities in
Ito, Atsushi
improvements in analysis/high performance computing. Managed by UT-Battelle for the U.S. Department
8/30/2001 Parallel Programming -Fall 2001 1 Models of Parallel Computation
Browne, James C.
Models of Parallel Computation: Philosophy of parallel programming. We will discuss parallelism from the viewpoint of programming but with connections to other domains.
Dessouky, Maged
A Hierarchical Task Model for Dispatching in Computer-Assisted Demand-Responsive Paratransit Operation. ABSTRACT ... Dispatch Training ... INTRODUCTION: Demand-responsive paratransit service is on the rise. For example
Lecture Notes in Computer Science 1 A Connectionist-Symbolic Approach to Modeling Agent
DeMara, Ronald F.
, Ronald F. DeMara, Intelligent Systems Laboratory, School of Electrical Engineering and Computer Science. CGFs are computer-controlled behavioral models of combatants used to serve as opponents against whom ... promise for providing "powerful learning models" in a recent National Research Council Report [5]. Also
Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational
Nagurney, Anna
Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational Approach. Anna Nagurney ... Supply Chain Challenges; The Medical Nuclear Supply Chain Network Design Model; The Computational Approach. This presentation is based on the paper, "Medical Nuclear Supply
Ferroelectrics 342:73-82, 2006 Computational Modeling of Ferromagnetic Shape Memory Thin Films
Luskin, Mitchell
... films of Ni2MnGa ferromagnetic shape memory alloys in response to the application of a magnetic field. Keywords: ferromagnetic, shape memory, active thin film, computational modeling. INTRODUCTION: The Ni2MnGa ferromagnetic
Computer simulation study of liquid CH2F2 with a new effective pair potential model
Mezei, Mihaly
A new effective pair potential model is proposed for computer simulations of liquid methylene fluoride and used in Monte Carlo simulations to reproduce the thermodynamic internal energy, density, heat capacity, vapor-liquid equilibrium and structural properties.
Energy Aware Algorithm Design via Probabilistic Computing: From Algorithms and Models to Moore's Law
Palem, Krishna V.
... opportunities for being energy-aware; the most fundamental limits are truly rooted in the physics of energy ... models of computing for energy-aware algorithm design and analysis, culminating in establishing
Protocols for BoundedConcurrent Secure TwoParty Computation in the Plain Model
Lindell, Yehuda
Protocols for Bounded-Concurrent Secure Two-Party Computation in the Plain Model. Yehuda Lindell, Department of Computer Science, Bar-Ilan University, Ramat Gan, 52900, Israel. lindell@cs.biu.ac.il. September 26 ... composition, in the plain model (where the only setup assumption made is that the parties communicate via authenticated
Nishino, Takafumi
2012-01-01T23:59:59.000Z
Modelling of turbine blade-induced turbulence (BIT) is discussed within the framework of three-dimensional Reynolds-averaged Navier-Stokes (RANS) actuator disk computations. We first propose a generic (baseline) BIT model, which is applied only to the actuator disk surface, does not include any model coefficients (other than those used in the original RANS turbulence model) and is expected to be valid in the limiting case where BIT is fully isotropic and in energy equilibrium. The baseline model is then combined with correction functions applied to the region behind the disk to account for the effect of rotor tip vortices causing a mismatch of Reynolds shear stress between short- and long-time averaged flow fields. Results are compared with wake measurements of a two-bladed wind turbine model of Medici and Alfredsson [Wind Energy, Vol. 9, 2006, pp. 219-236] to demonstrate the capability of the new model.
Recommendations for computer modeling codes to support the UMTRA groundwater restoration project
Tucker, M.D. [Sandia National Labs., Albuquerque, NM (United States); Khan, M.A. [IT Corp., Albuquerque, NM (United States)
1996-04-01T23:59:59.000Z
The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.
Model computations of blue stragglers and W UMa-type stars in globular clusters
Stepien, Kazimierz
2015-01-01T23:59:59.000Z
It was recently demonstrated that contact binaries occur in globular clusters (GCs) only immediately below the turn-off point and in the region of blue straggler stars (BSs). In addition, observations indicate that at least a significant fraction of BSs in these clusters was formed by the binary mass-transfer mechanism. The aim of our present investigation is to obtain and analyze a set of evolutionary models of cool, close detached binaries with a low metal abundance, which are characteristic of GCs. We computed the evolution of 975 models of initially detached, cool close binaries with different initial parameters. The models include mass exchange between components as well as mass and angular momentum loss due to the magnetized winds for very low-metallicity binaries with Z = 0.001. The models are interpreted in the context of existing data on contact binary and blue straggler members of GCs. The model parameters agree well with the observed positions of the GC contact binaries in the Hertzsprung-Russell diagram...
Low-level fluoride trapping studies experimental work for computer modeling program
Russell, R.G.
1988-11-21T23:59:59.000Z
The material presented in this report involved experimental work performed to assist in determining the constants for a computer modeling program being developed by Production Engineering for use in trap design. Included in this study is bed distribution studies to define uranium loading on alumina (Al/sub 2/O/sub 3/) and sodium fluoride (NaF) with respect to bed zones. A limited amount of work was done on uranium penetration into NaF pellets. Only the experimental work is reported here; Production Engineering will use this data to develop constants for the computer model. Some of the significant conclusions are: NaF has more capacity to load UF/sub 6/, but Al/sub 2/O/sub 3/ distributes the load more equally; velocity, system pressure, and operating temperature influence uranium loading; and in comparative tests NaF had a loading of 25%, while Al/sub 2/O/sub 3/ was 13%. 2 refs., 10 figs., 5 tabs.
Max Morris
2001-04-01T23:59:59.000Z
Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; however, in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
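The abstract above describes a generic statistical model of a sensor array, with measurement error common to all sensors plus error independent from sensor to sensor, used to estimate two-class misclassification rates after calibration on a limited number of training samples. The report's own program is not reproduced here; the following is a minimal Monte Carlo sketch of that kind of calculation, in which all noise levels, class signatures, and sample sizes are hypothetical values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_array(n, k, mean, common_sd=0.5, indep_sd=1.0):
    """Simulate n readings from a k-sensor array: a class mean plus a noise
    term shared by all sensors plus per-sensor independent noise."""
    common = rng.normal(0.0, common_sd, size=(n, 1))  # error common to all sensors
    indep = rng.normal(0.0, indep_sd, size=(n, k))    # sensor-to-sensor error
    return mean + common + indep

def misclassification_rate(k, n_train, n_test=2000):
    """Calibrate a nearest-centroid classifier on n_train samples per class,
    then estimate its error rate on fresh simulated test data."""
    mu_a, mu_b = np.zeros(k), np.full(k, 1.0)  # hypothetical class signatures
    # Calibration ("training") phase: estimate each class centroid.
    c_a = simulate_array(n_train, k, mu_a).mean(axis=0)
    c_b = simulate_array(n_train, k, mu_b).mean(axis=0)
    # Test phase: classify fresh readings by nearest centroid.
    test_a = simulate_array(n_test, k, mu_a)
    test_b = simulate_array(n_test, k, mu_b)
    err_a = np.sum(np.linalg.norm(test_a - c_a, axis=1) >
                   np.linalg.norm(test_a - c_b, axis=1))
    err_b = np.sum(np.linalg.norm(test_b - c_b, axis=1) >
                   np.linalg.norm(test_b - c_a, axis=1))
    return (err_a + err_b) / (2 * n_test)

for k in (2, 8, 32):
    print(k, misclassification_rate(k, n_train=20))
```

Because the common error component shifts every sensor together, its contribution grows with array size, so the error rate does not shrink toward zero as sensors are added; this echoes the abstract's observation that larger arrays do not always yield proportionally more practical value.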
Sound-insulation layer modelling in car computational vibroacoustics in the medium-frequency range
Boyer, Edmond
In a previous article, a simplified low- and medium-frequency model for uncertain automotive sound-insulation ... In this paper, the simplified insulation model is implemented in an industrial stochastic vibroacoustic model
Bürger, Raimund
One-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. The accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, © 2011 Institute for Scientific Computing
Bürger, Raimund
One-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. The accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, © 2012 Institute for Scientific Computing
Vigil,Benny Manuel [Los Alamos National Laboratory; Ballance, Robert [SNL; Haskell, Karen [SNL
2012-08-09T23:59:59.000Z
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
Wu, K.T.; Li, B.; Payne, R.
1992-06-01T23:59:59.000Z
This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits boiler heat transfer, fuels combustion, and pollutant emissions predictions related to a number of practical boiler operations such as fuel-switching, fuels co-firing, and reburning NOx reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning, and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is conveniently named BPACK (Boiler Package) and consists of six computer codes, of which three are main computational codes and the other three are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general user interest, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of the worked examples to assist users in applying the models, and to illustrate the versatility of the codes.
Computational Modeling for the American Chemical Society | GE...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
systems that exist on the line between computers and machines. He is currently developing autonomous robotic systems that will act as a bridge between the physical world and the...
Computational models in the debate over language learnability
Paris-Sud XI, Université de
CH-1015 Lausanne, Switzerland; Sony Computer Science Laboratory Paris, 6 rue Amyot, 75005 Paris; ... HI 96822, USA. frederic.kaplan@epfl.ch, oudeyer@csl.sony.fr, bergen@hawaii.edu. May 17, 2007. Abstract
Applications of Computer Modelling to Fire Safety Design
Torero, Jose L; Steinhaus, Thomas
Tools in support of fire safety engineering design have proliferated in the last few years due to the increased performance of computers. These tools are currently being used in a generalized manner in areas such as egress, ...
analytic computer model: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
ecological regimes. The time transient allows one to define time scales for the system evolution, which can be relevant for the study of tumor growth by theoretical or computer...
Cloud computing adoption model for governments and large enterprises
Trivedi, Hrishikesh
2013-01-01T23:59:59.000Z
Cloud Computing has held organizations across the globe spellbound with its promise. As it moves from being a buzzword and hype into adoption, organizations are faced with the question of how best to adopt cloud. Existing ...
JACKSON VL
2011-08-31T23:59:59.000Z
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Shapiro, C.S.
1984-08-01T23:59:59.000Z
The GLODEP2 computer code was utilized to determine biological impact to humans on a global scale using up-to-date estimates of biological risk. These risk factors use varied biological damage models for assessing effects. All the doses reported are the unsheltered, unweathered, smooth terrain, external gamma dose. We assume the unperturbed atmosphere in determining injection and deposition. Effects due to ''nuclear winter'' may invalidate this assumption. The calculations also include scenarios that attempt to assess the impact of the changing nature of the nuclear stockpile. In particular, the shift from larger to smaller yield nuclear devices significantly changes the injection pattern into the atmosphere, and hence significantly affects the radiation doses that ensue. We have also looked at injections into the equatorial atmosphere. In total, we report here the results for 8 scenarios. 10 refs., 6 figs., 11 tabs.
Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus
Raustad, Richard
2013-06-01T23:59:59.000Z
Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor unit(s). These systems move refrigerant between the outdoor and indoor units, which eliminates the need for duct work in most cases. Since many applications install duct work in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor which may lead to operational differences compared to single-speed compression systems.
To characterize operational differences, the computer model performance curves used to simulate cooling operation are also evaluated. The information in this paper is intended to provide a relative difference in system energy use and compare various installation practices that can impact performance. Comparative results of VRF versus conventional HVAC systems include energy use differences due to duct location, differences in fan energy when ducts are eliminated, and differences associated with electric versus fossil fuel type heating systems.
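The VRF abstract above attributes part of the savings to eliminating duct heat gain and reducing fan energy. As a back-of-the-envelope illustration only (the loads, COPs, duct gain fraction, and fan energies below are hypothetical, not values from the EnergyPlus study), that bookkeeping can be sketched as:

```python
def hvac_cooling_energy(zone_load_kwh, cop, duct_gain_frac=0.0, fan_kwh=0.0):
    """Cooling electricity: the coil must remove the zone load plus any heat
    picked up by supply ducts running through unconditioned space."""
    coil_load = zone_load_kwh * (1.0 + duct_gain_frac)
    return coil_load / cop + fan_kwh

zone_load = 10_000  # annual zone cooling load, kWh (hypothetical)

# Conventional ducted system: ducts in an unconditioned attic add to the coil
# load (20% is a hypothetical value), plus ducted-fan energy.
conventional = hvac_cooling_energy(zone_load, cop=3.2, duct_gain_frac=0.20, fan_kwh=900)

# Non-ducted VRF terminal units: no duct heat gain, lower fan energy.
vrf = hvac_cooling_energy(zone_load, cop=3.5, duct_gain_frac=0.0, fan_kwh=400)

savings = 1.0 - vrf / conventional
print(f"conventional {conventional:.0f} kWh, VRF {vrf:.0f} kWh, savings {savings:.0%}")
```

With these invented inputs the sketch lands near the ~30% savings figure the abstract cites for some field installations, but the point is only to show how duct heat gain and fan energy enter the comparison, not to reproduce the study's results.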
Center for Programming Models for Scalable Parallel Computing: Future Programming Models
Gao, Guang, R.
2008-07-24T23:59:59.000Z
The mission of the pmodel center project is to develop software technology to support scalable parallel programming models for terascale systems. The goal of the specific UD subproject, in this context, is to develop an efficient and robust methodology and tools for HPC programming. More specifically, the focus is on developing new programming models which facilitate programmers in porting their applications onto parallel high performance computing systems. During the course of the research in the past 5 years, the landscape of microprocessor chip architecture has witnessed a fundamental change: the emergence of multi-core/many-core chip architectures, which appear to be becoming the mainstream technology and will have a major impact on future-generation parallel machines. The programming model for shared-address-space machines is becoming critical to such multi-core architectures. Our research highlight is the in-depth study of proposed fine-grain parallelism/multithreading support on such future-generation multi-core architectures. Our research has demonstrated the significant impact such a fine-grain multithreading model can have on the productivity of parallel programming models and their efficient implementation.
Ahmadi, G.
1993-12-31T23:59:59.000Z
The objective of this project is to develop an accurate model describing turbulent flows of coal slurries, rapid flows of granular coal-air mixtures, and turbulent coal combustion processes. The other main objective is to develop a computer code incorporating the new model. Experimental verification of the foundation of the model is also included in the study. In this report the thermodynamically consistent, rate-dependent model for turbulent two-phase flow analysis was used and the phasic fluctuation energy dissipation rates were evaluated. Further progress on the application of the kinetic model for rapid flows of granular materials, including the frictional energy losses, was made. The velocity, fluctuation energy, and solid volume fraction profiles for granular flows down a vertical channel were obtained. The results were compared with the molecular dynamic simulations of Savage and good agreement was obtained. The computational model was used and the rapid granular flows around a rectangular block in a channel were analyzed. The effect of a bumpy wall on the flow of granular materials was analyzed. The special case of Couette flow was studied. The preliminary results obtained are quite encouraging. Further progress was made in the experimental study of the mono-layer simple shear flow device. Preliminary data concerning the shearing of 12 mm multi-color glass particles were obtained.
RISØ-M-2482 COMPUTER MODELLING OF RADIOACTIVE SOURCE TERMS
/INTOR workshops. INIS descriptors: M CODES; MATHEMATICAL MODELS; MONTE CARLO METHOD; NEUTRON TRANSPORT; TOKAMAK
Tesfatsion, Leigh
Bounded Computing Capacity · Explicit Space · Local Interactions · Non-explanatory notion. Sugarscape: Events unfold on a landscape of renewable...
A.24 ENHANCING THE CAPABILITY OF COMPUTATIONAL EARTH SYSTEM MODELS AND NASA DATA: computational support of Earth system modeling. 2.1 Acceleration of Operational Use of Research Data
Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)
1993-10-01T23:59:59.000Z
The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.
Computer modeling reveals how surprisingly potent hepatitis C drug works
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computer-Aided Construction of Combustion Chemistry Models
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Internship Parallel Computer Evaluation Parallelization of a Lagrangian Particle Diffusion Model
A use case is a nuclear accident such as a core meltdown at an atomic power plant, where radiation is emitted into the air. The Lagrangian model can predict how the nuclear cloud spreads under different ... that will be computed. Particle: one single molecule floating in the wind field. Compute unit: one unit that runs
Illinois at Chicago, University of
2007-01-01T23:59:59.000Z
Resources, Conservation and Recycling 51 (2007) 847-869. Modeling obsolete computer stock under ... and recycling systems using GIS, and demonstrate the potential economic benefits from diverting electronic ... buildings. © 2007 Elsevier B.V. All rights reserved. Keywords: Computer recycling; Product inventory
Probabilistic Model Checking and Power-Aware Computing Marta Kwiatkowska Gethin Norman David Parker
Oxford, University of
operating system control, can be switched either on and off or between several power states of varying power consumption. ... Power-aware computing aims either to maximise the performance of a system under certain constraints on its power
A Network Model and Computational Approach for the 99Mo Supply Chain for Nuclear Medicine
Nagurney, Anna
A Network Model and Computational Approach for the 99Mo Supply Chain for Nuclear Medicine. Ladimer S. Nagurney (Department of Electrical and Computer Engineering) and Anna Nagurney (University of Massachusetts - Amherst, Massachusetts 01003). Fall 2011 Joint Meeting Of The New England
Computer representation of the model covariance function resulting from travel-time tomography
Cerveny, Vlastislav
Computer representation of the model covariance function resulting from travel-time tomography ... a supplement to the paper by Klimeš (2002b) on the stochastic travel-time tomography. It contains brief ... The covariance function is a function of 6 coordinates with pronounced singularities. The computer
"Creating computational models of biological systems to better combat
Acton, Scott
that could be used for biofuel and other metabolic engineering applications. · Performed high ... of Microbial Pathogens. Infectious disease is the leading cause of death worldwide. While genomics has had ... system in biofuel and nutraceutical production. With the aid of computational techniques, we can predict
Discovering Novel Cancer Therapies: A Computational Modeling and Search Approach
Flann, Nicholas
of new blood vessels (angiogenesis) is an important approach in cancer treatment. However, the complexity-based approach for the discovery of novel potential cancer treatments using a high fidelity simulation in cancer treatment [2]. This paper introduces a computational approach to search for novel intervention
Learning Partially Observable Deterministic Action Models Computer Science Department
Amir, Eyal
. For example, the overall time for learning STRIPS actions' effects is O(T · n). For other cases the update per ... approximate the representation with a k-CNF formula, yielding an overall time of O(T · n^k) for the entire ... and games. Other applications, such as robotics, human-computer interfaces, and program and
A Computational Model to Connect Gestalt Perception and Natural Language
Roy, Deb
Thesis supervisor: Deb K. Roy, Assistant Professor of Media Arts and Sciences. Accepted by Alex P. Pentland, Toshiba Professor of Media Arts and Sciences, Massachusetts Institute of Technology.
Modelling Photochemical Pollution using Parallel and Distributed Computing Platforms
Abramson, David
of photochemical air pollution (smog) in industrialised cities. However, computational hardware demands can ... that have been used as part of an air pollution study being conducted in Melbourne, Australia. We also ... necessary to perform real air pollution studies. The system is used as part of the Melbourne Airshed study
Michael V. Glazoff; Piyush Sabharwall; Akira Tokuhiro
2014-09-01T23:59:59.000Z
An evaluation of thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in the eutectic mixtures of KF and ZrF4 is carried out for development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using the computational thermodynamics tool ThermoCalc, leading to the development of a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was associated with chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance. This was not surprising, given that it was developed originally to withstand the harsh conditions of a molten salt environment. However, the results obtained in this study provided confidence in the employed methods of computational thermodynamics and could be used for future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.
Emergency Response Equipment and Related Training: Airborne Radiological Computer System (Model II)
David P. Colton
2007-02-28T23:59:59.000Z
The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record the airborne gamma radiation levels and the corresponding latitude and longitude coordinates, and to provide a first overview look of the extent and severity of an accident's impact. Second, the portable system had to be light enough and durable enough that it could be mounted in an aircraft, ground vehicle, or watercraft. Third, the system must control the collection and storage of the data, as well as provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system would only be used for analyzing and plotting the data. In essence, the provided equipment is composed of an acquisition system and an analysis system. The data can be transferred from the acquisition system to the analysis system at the end of the data collection or at some other agreeable time.
Computational Fluid Dynamics Modeling of a Lithium/Thionyl Chloride Battery with Electrolyte Flow
Wang, Chao-Yang
Computational Fluid Dynamics Modeling of a Lithium/Thionyl Chloride Battery with Electrolyte Flow W-dimensional model is developed to simulate discharge of a primary lithium/thionyl chloride battery. The model to the first task with important examples of lead-acid,1-3 nickel-metal hydride,4-8 and lithium-based batteries
Model Discovery for Energy-Aware Computing Systems: An Experimental Evaluation
Stoller, Scott
experimentally. The process of model discovery for energy- aware systems, in advance of controller design. Such models are also prerequisites for the appli- cation of control theory to energy-aware systems. We.e., the computing system to be controlled) using system identification; (2) use the plant model to design
A computational contact model for nanoscale rubber adhesion Roger A. Sauer
A computational contact model for nanoscale rubber adhesion Roger A. Sauer Institute for Continuum Mechanics, Leibniz Universität Hannover, Germany published in Constitutive Models for Rubber VI ... mechanical contact model which is capable of describing and simulating rubber adhesion at the nanometer scale
Comments on the use of computer models for merger analysis in the electricity industry
California at Berkeley. University of
that the commission is considering, electricity market models, production cost/optimal power flow models, and hybridsComments on the use of computer models for merger analysis in the electricity industry FERC Docket for market power in electricity markets. These analyses have yielded several insights about the application
Ortiz, Michael
Computational modeling of damage evolution in unidirectional fiber reinforced ceramic matrix mechanical re- sponse of a ceramic matrix composite is simulated by a numerical model for a ®ber-matrix unit evolution in brittle matrix composites was developed. This modeling is based on an axisymmetric unit cell
Protein translocation without specific quality control in a computational model of the Tat system
Chitra R. Nayak; Aidan I. Brown; Andrew D. Rutenberg
2014-08-20T23:59:59.000Z
The twin-arginine translocation (Tat) system transports folded proteins of various sizes across both bacterial and plant thylakoid membranes. The membrane-associated TatA protein is an essential component of the Tat translocon, and a broad distribution of different sized TatA-clusters is observed in bacterial membranes. We assume that the size dynamics of TatA clusters are affected by substrate binding, unbinding, and translocation to associated TatBC clusters, where clusters with bound translocation substrates favour growth and those without associated substrates favour shrinkage. With a stochastic model of substrate binding and cluster dynamics, we numerically determine the TatA cluster size distribution. We include a proportion of targeted but non-translocatable (NT) substrates, with the simplifying hypothesis that the substrate translocatability does not directly affect cluster dynamical rate constants or substrate binding or unbinding rates. This amounts to a translocation model without specific quality control. Nevertheless, NT substrates will remain associated with TatA clusters until unbound and so will affect cluster sizes and translocation rates. We find that the number of larger TatA clusters depends on the NT fraction $f$. The translocation rate can be optimized by tuning the rate of spontaneous substrate unbinding, $\\Gamma_U$. We present an analytically solvable three-state model of substrate translocation without cluster size dynamics that follows our computed translocation rates, and that is consistent with {\\em in vitro} Tat-translocation data in the presence of NT substrates.
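The abstract's "analytically solvable three-state model" can be illustrated with a minimal sketch. This is not the authors' formulation or parameterization: the state names (empty, bound translocatable, bound non-translocatable), the rate names `k_on`, `k_t`, `gamma_u`, and all numerical values are assumptions chosen only to reproduce the qualitative claim that the translocation rate peaks at an intermediate spontaneous unbinding rate $\Gamma_U$.

```python
import numpy as np

def translocation_flux(gamma_u, k_on=1.0, k_t=1.0, f=0.2):
    """Steady-state translocation flux for a minimal three-state site:
    E (empty), T (bound translocatable), N (bound non-translocatable).
    Transitions: E->T at k_on*(1-f), E->N at k_on*f,
    T->E at k_t + gamma_u (translocation or unbinding), N->E at gamma_u."""
    # The chain is star-shaped around E, so each edge balances in steady state.
    pi_T_over_E = k_on * (1.0 - f) / (k_t + gamma_u)
    pi_N_over_E = k_on * f / gamma_u
    pi_E = 1.0 / (1.0 + pi_T_over_E + pi_N_over_E)
    return pi_E * pi_T_over_E * k_t  # flux = pi_T * k_t

# Scan the unbinding rate: flux vanishes at both extremes, so an
# intermediate gamma_u maximizes translocation, as the abstract describes.
gammas = np.logspace(-3, 3, 200)
fluxes = np.array([translocation_flux(g) for g in gammas])
best = gammas[np.argmax(fluxes)]
```

The flux goes to zero at both limits: at small `gamma_u`, non-translocatable substrates occupy sites almost permanently; at large `gamma_u`, substrates unbind before they can be translocated.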
Gedeon, Tomas
, from those appearing in physiology and ecology to Earth systems modeling, often experience critical
Computationally Efficient Regularized Inversion for Highly Parameterized MODFLOW Models
Barrash, Warren
. INTRODUCTION The inverse problem in groundwater modeling is generally ill-posed and non-unique. The typical geological heterogeneity has not been possible in common groundwater modeling practice. The principal reasons-Marquardt methods, and (3) lack of experience within the groundwater modeling community with regularized inversion
de la Hoz del Hoyo, Diego
2014-07-01T23:59:59.000Z
This thesis examines how computer modelling matters for policy-making by looking at two case studies of European fisheries management. Based on documentary analysis and ethnographic interviews and observations, the main ...
Karplus, V.J.
A well-known challenge in computable general equilibrium (CGE) models is to maintain correspondence between the forecasted economic and physical quantities over time. Maintaining such a correspondence is necessary to ...
Seagraves, Andrew Nathan
2010-01-01T23:59:59.000Z
In this thesis a new parallel computational method is proposed for modeling threedimensional dynamic fracture of brittle solids. The method is based on a combination of the discontinuous Galerkin (DG) formulation of the ...
Huang, Yongxin
2010-01-16T23:59:59.000Z
using MPI. The results show the cluster system can simultaneously support up to 32 processes for MPI program with high performance of interprocess communication. The parallel computations of phase field model of magnetic materials implemented by a MPI...
Pruess, K.
2011-05-15T23:59:59.000Z
Storage of CO{sub 2} in saline aquifers is intended to be at supercritical pressure and temperature conditions, but CO{sub 2} leaking from a geologic storage reservoir and migrating toward the land surface (through faults, fractures, or improperly abandoned wells) would reach subcritical conditions at depths shallower than 500-750 m. At these and shallower depths, subcritical CO{sub 2} can form two-phase mixtures of liquid and gaseous CO{sub 2}, with significant latent heat effects during boiling and condensation. Additional strongly non-isothermal effects can arise from decompression of gas-like subcritical CO{sub 2}, the so-called Joule-Thomson effect. Integrated modeling of CO{sub 2} storage and leakage requires the ability to model non-isothermal flows of brine and CO{sub 2} at conditions that range from supercritical to subcritical, including three-phase flow of aqueous phase, and both liquid and gaseous CO{sub 2}. In this paper, we describe and demonstrate comprehensive simulation capabilities that can cope with all possible phase conditions in brine-CO{sub 2} systems. Our model formulation includes: (1) an accurate description of thermophysical properties of aqueous and CO{sub 2}-rich phases as functions of temperature, pressure, salinity and CO{sub 2} content, including the mutual dissolution of CO{sub 2} and H{sub 2}O; (2) transitions between super- and subcritical conditions, including phase change between liquid and gaseous CO{sub 2}; (3) one-, two-, and three-phase flow of brine-CO{sub 2} mixtures, including heat flow; (4) non-isothermal effects associated with phase change, mutual dissolution of CO{sub 2} and water, and (de-) compression effects; and (5) the effects of dissolved NaCl, and the possibility of precipitating solid halite, with associated porosity and permeability change. 
Applications to specific leakage scenarios demonstrate that the peculiar thermophysical properties of CO{sub 2} provide a potential for positive as well as negative feedbacks on leakage rates, with a combination of self-enhancing and self-limiting effects. Lower viscosity and density of CO{sub 2} as compared to aqueous fluids provides a potential for self-enhancing effects during leakage, while strong cooling effects from liquid CO{sub 2} boiling into gas, and from expansion of gas rising towards the land surface, act to self-limit discharges. Strong interference between fluid phases under three-phase conditions (aqueous - liquid CO{sub 2} - gaseous CO{sub 2}) also tends to reduce CO{sub 2} fluxes. Feedback on different space and time scales can induce non-monotonic behavior of CO{sub 2} flow rates.
EMSL - Molecular Science Computing
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
computing Resources and Techniques Molecular Science Computing - Sophisticated and integrated computational capabilities, including scientific consultants, software, Cascade...
Held, Christian [Hochschul Institute Neckarsulm, Gottlieb-Daimler-Strasse 40, 74172 Neckarsulm (Germany); Liewald, Mathias; Schleich, Ralf [Institute for Metal Forming Technology, Universitaet Stuttgart, Stuttgart (Germany); Sindel, Manfred [AUDI AG, Neckarsulm (Germany)
2010-06-15T23:59:59.000Z
The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such kinds of sheet material are more susceptible to wrinkling, spring-back and fracture during press shop operations. For characterization of the capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that high-strength steel sheet material and aluminum alloys show increased formability when bending loads are superposed on stretching loads. Likewise, by superposing shearing on in-plane uniaxial or biaxial tension, formability changes because of the material's crystallographic texture. Such mixed stress and strain conditions, including bending and shearing effects, can occur in deep-drawing processes of complex car body parts as well as in subsequent forming operations like flanging. But changes in formability cannot be described by using the conventional FLC. Hence, for the purpose of improving failure prediction in numerical simulation codes, significant failure criteria for these strain conditions are missing. Considering such aspects, and aiming at failure criteria that are easy to implement into FEA, a new semi-empirical model has been developed that considers the effect of bending and shearing on sheet metal formability. This failure criterion consists of the combination of the so-called cFLC (combined Forming Limit Curve), which considers superposed bending load conditions, and the SFLC (Shear Forming Limit Curve), which includes the effect of shearing on sheet metal formability.
Rothe, Jeanne Marie
1983-01-01T23:59:59.000Z
A COMPUTER SIMULATION MODEL FOR THE PREDICTION OF TEMPERATURE DISTRIBUTIONS IN RADIOFREQUENCY HYPERTHERMIA TREATMENT. A Thesis by JEANNE MARIE ROTHE, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1983. Major Subject: Bioengineering.
A new, efficient computational model for the prediction of fluid seal flowfields
Hibbs, Robert Irwin
1988-01-01T23:59:59.000Z
A NEW, EFFICIENT COMPUTATIONAL MODEL FOR THE PREDICTION OF FLUID SEAL FLOWFIELDS. A Thesis by ROBERT IRWIN HIBBS, JR., submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1988. Major Subject: Mechanical Engineering. Approved as to style and content by: David L. Rhode...
Computational Models for Image Guided, Robot-Assisted and Simulated Medical Interventions
Paris-Sud XI, Université de
their potential use in a number of advanced medical applications including image guided, robot and force feedback. Such procedures require the use of advanced medical image analysis methods and have brought many advances in several medical fields includ- ing computer-aided diagnosis, therapy
Interactive Off-Line Computer Modeling for Powerhouse Operations
Delk, S. R.; Jones, W. G.
1982-01-01T23:59:59.000Z
[Figure 3: Computer Input Data (fuel inputs for Boilers 1-5 and 6-7, gas turbine fuel, Turbine Generators No. 1-3, Gas Turbine Generator No. 4, and purchased electricity)] ESL-IE-82-04-84 Proceedings from the Fourth...
Three-dimensional electromagnetic modeling and inversion on massively parallel computers
Newman, G.A.; Alumbaugh, D.L. [Sandia National Labs., Albuquerque, NM (United States). Geophysics Dept.
1996-03-01T23:59:59.000Z
This report has demonstrated techniques that can be used to construct solutions to the 3-D electromagnetic inverse problem using full wave equation modeling. To this point great progress has been made in developing an inverse solution using the method of conjugate gradients which employs a 3-D finite difference solver to construct model sensitivities and predicted data. The forward modeling code has been developed to incorporate absorbing boundary conditions for high frequency solutions (radar), as well as complex electrical properties, including electrical conductivity, dielectric permittivity and magnetic permeability. In addition both forward and inverse codes have been ported to a massively parallel computer architecture which allows for more realistic solutions than can be achieved with serial machines. While the inversion code has been demonstrated on field data collected at the Richmond field site, techniques for appraising the quality of the reconstructions still need to be developed. Here it is suggested that rather than employing direct matrix inversion to construct the model covariance matrix, which would be impossible because of the size of the problem, one can linearize about the 3-D model achieved in the inverse and use Monte-Carlo simulations to construct it. Using these appraisal and construction tools, it is now necessary to demonstrate 3-D inversion for a variety of EM data sets that span the frequency range from induction sounding to radar: below 100 kHz to 100 MHz. Appraised 3-D images of the earth's electrical properties can provide researchers opportunities to infer the flow paths, flow rates and perhaps the chemistry of fluids in geologic media. It also offers a means to study the frequency dependence behavior of the properties in situ. This is of significant relevance to the Department of Energy, paramount to characterizing and monitoring of environmental waste sites and oil and gas exploration.
Vassilis Geroyannis; Georgios Kleftogiannis
2014-06-14T23:59:59.000Z
We revisit the problem of radial pulsations of neutron stars by computing four general-relativistic polytropic models, in which "density" and "adiabatic index" are involved with their discrete meanings: (i) "rest-mass density" or (ii) "mass-energy density" regarding the density, and (i) "constant" or (ii) "variable" regarding the adiabatic index. Considering the resulting four discrete combinations, we construct corresponding models and compute for each model the frequencies of the lowest three radial modes. Comparisons with previous results are made. The deviations of respective frequencies of the resolved models seem to exhibit a systematic behavior, an issue discussed here in detail.
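The paper computes general-relativistic polytropic models; as a much simpler illustration of polytropic stellar structure, the Newtonian Lane-Emden equation can be integrated numerically. This sketch is our own and is not the authors' general-relativistic formulation; it only shows what a polytropic model is.

```python
import math

def lane_emden(n, h=1e-4):
    """Integrate the (Newtonian) Lane-Emden equation
        theta'' + (2/xi) theta' + theta^n = 0,  theta(0)=1, theta'(0)=0,
    with RK4, returning the first zero xi_1 of theta (the stellar surface)."""
    def rhs(xi, theta, dtheta):
        # Series limit theta''(0) = -1/3 handles the regular singular point.
        if xi == 0.0:
            return dtheta, -theta**n / 3.0
        return dtheta, -theta**n - 2.0 * dtheta / xi

    xi, theta, dtheta = 0.0, 1.0, 0.0
    while theta > 0.0:
        k1 = rhs(xi, theta, dtheta)
        k2 = rhs(xi + h/2, theta + h/2*k1[0], dtheta + h/2*k1[1])
        k3 = rhs(xi + h/2, theta + h/2*k2[0], dtheta + h/2*k2[1])
        k4 = rhs(xi + h, theta + h*k3[0], dtheta + h*k3[1])
        theta += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dtheta += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
    return xi

# For n = 1 the analytic solution is theta = sin(xi)/xi, so xi_1 = pi;
# for n = 0, theta = 1 - xi^2/6, so xi_1 = sqrt(6).
xi1 = lane_emden(1.0)
```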
New partnership uses advanced computer science modeling to address...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to address the most challenging and demanding climate change issues. Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application...
advanced computational model: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
3.4.1 Heat Exchanger - Code description; 3.4.2 Simulation Results ... ADVANCED POWER PLANT MODELING WITH APPLICATIONS TO THE ADVANCED BOILING...
adolescents computer modelling: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
in more realistic implementations. This model has two free parameters: the adiabatic evolution parameter s and the alpha parameter which emulates many-variables...
WUFI COMPUTER MODELING WORKSHOP FOR WALL DESIGN AND PERFORMANCE
Oak Ridge National Laboratory
, building forensic specialists, manufacturer representatives, facilities managers, IAQ specialists of modeling for new products are demonstrated by both group and individual interaction. · You will learn how
Reversible computation as a model for the quantum measurement process
Karl Svozil
2009-04-15T23:59:59.000Z
One-to-one reversible automata are introduced. Their applicability to a modelling of the quantum mechanical measurement process is discussed.
External-Memory Computational Geometry
Goodrich, Michael T.; Tsay, Jyh-Jong; Vengroff, Darren Erik; Vitter, Jeffrey Scott
1993-01-01T23:59:59.000Z
the first known optimal algorithms for a wide range of two-level and hierarchical multilevel memory models, including parallel models. The algorithms are optimal both in terms of I/O cost and internal computation....
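The memory model referenced here charges one unit per block transfer; the classic external-memory sorting bound is Θ((N/B) log_{M/B}(N/B)) I/Os for N items, memory capacity M, and block size B. A rough, constant-free cost estimator (the function name and single-constant form are our simplification, not the paper's analysis):

```python
import math

def sort_io_cost(N, M, B):
    """Estimate the I/O cost of external-memory sorting,
    Theta((N/B) * log_{M/B}(N/B)) block transfers."""
    n_blocks = math.ceil(N / B)
    fanout = M // B  # number of blocks that fit in memory (merge fan-in)
    passes = max(1, math.ceil(math.log(n_blocks, fanout)))
    return n_blocks * passes

# Example: 1 GiB of records with 64 MiB of memory and 1 MiB blocks
# needs two passes over its 1024 blocks.
cost = sort_io_cost(N=2**30, M=2**26, B=2**20)
```

The point of the model is visible in the formula: the base of the logarithm is M/B, not 2, so large memories cut the number of passes dramatically compared with binary merging.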
Computational Model of Film Editing for Interactive Storytelling
Boyer, Edmond
Generating interactive narratives as movies requires knowledge in cinematography (camera placement, framing). Keywords: Camera planning, Virtual Cinematography. 1 Introduction: In interactive storytelling, it is useful ... with the rules of cinematography and editing, including shot composition, continuity editing and pacing
Goodarz Ahmadi
2002-07-01T23:59:59.000Z
In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel, and makes use of the Lagrangian trajectory analysis for the particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of hot-gas flow in this tangential flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that leads to the removal of a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered. These include a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated. The deposition patterns in the various vessels are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake leads to lower mechanical strength, thus allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was experimentally studied. A computer model for the experimental vessel was also developed, and the gas flow and particle transport patterns are evaluated.
MATHEMATICAL Mathematical and Computer Modelling 35 (2002) 1365-1370
Gorban, Alexander N.
obstacle is introduced. This model applied to the estimation of the efficiency of free flow turbines allows reserved. Keywords-Cavitation flows, Riabouchinsky model, Kirchhoff method, Pree boundary problems. 1 by the recent progress in the development of free flow turbines [l] for the purpose of estimating
Demonstrating the improvement of predictive maturity of a computational model
Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.
2010-01-01T23:59:59.000Z
We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
Gaussian Process Modeling and Computation in Engineering Applications
Pourhabib, Arash
2014-07-08T23:59:59.000Z
; and predictive modeling for large datasets. First, we develop a spatial-temporal model for local wind fields in a wind farm with more than 200 wind turbines. Our framework utilizes the correlation among the derivatives of wind speeds to find a neighborhood...
Reliable Computation of Binary Parameters in Activity Coefficient Models
Stadtherr, Mark A.
phase equilibria. The technique is demonstrated with examples using the NRTL and electrolyte-NRTL (eNRTL) models. In two of the NRTL examples, results are found that contradict previous work. In the eNRTL time that a method for parameter estimation in the eNRTL model from binary LLE data (mutual solubility
Huang, Su-Yun
factors · Simulation codes with calibration parameters ... Example: Designing Cellular Heat Exchangers in Qian et al. (2006, ASME). Related to the autoregressive model in Kennedy and O'Hagan (2000) · x = (x1
HIV virus spread and evolution studied through computer modeling
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
When Model Checking Met Deduction Computer Science Laboratory
Clarke, Edmund M.
Park, CA, Sep 19, 2014. "It is of course important that some efforts be made to verify ... hold in each case." Alan Turing (quoted by D. MacKenzie in Risk and Reason). N. Shankar, Model checking
A Computational Model of How the Basal Ganglia Produce Sequences
Berns, Gregory S.
closely on known anatomy and physiology. First, we assume that the thalamic targets, which relay ascend the external globus pallidus (GPe) and the subthalamic nucleus (STN). As a test of the model, the system
Computational tools for modeling and measuring chromosome structure
Ross, Brian Christopher
2012-01-01T23:59:59.000Z
DNA conformation within cells has many important biological implications, but there are challenges both in modeling DNA due to the need for specialized techniques, and experimentally since tracing out in vivo conformations ...
Scalable computational architecture for integrating biological pathway models
Shiva, V. A
2007-01-01T23:59:59.000Z
A grand challenge of systems biology is to model the cell. The cell is an integrated network of cellular functions. Each cellular function, such as immune response, cell division, metabolism or apoptosis, is defined by an ...
A Method for Computing Conditional Probabilities in Probabilistic Library Model
[Abstract garbled in extraction. The work concerns the Probabilistic Library Model (PLM) for in-vitro DNA computing, including PCR dilution and a DNA formula representation (wDNF), and a method for computing conditional probabilities in the PLM.]
Computer support to run models of the atmosphere. Final report
Fung, I.
1996-08-30T23:59:59.000Z
This research is focused on a better quantification of the variations in CO{sub 2} exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO{sub 2} distribution. The principal tool involves using a global three-dimensional tracer transport model to advect and convect CO{sub 2} in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.
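The advection step of such a tracer transport model can be illustrated with a toy one-dimensional scheme. This is our illustration, not the GISS code: first-order upwind differencing on a periodic grid conserves tracer mass exactly provided the CFL condition u·dt/dx ≤ 1 holds.

```python
import numpy as np

def advect_upwind(c, u, dx, dt, steps):
    """Advance a periodic 1-D tracer field c by first-order upwind
    advection with constant speed u > 0. A toy stand-in for the
    advection step of a global tracer transport model."""
    cfl = u * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    for _ in range(steps):
        # Flux into each cell comes from its upwind (left) neighbour.
        c = c - cfl * (c - np.roll(c, 1))
    return c

# A blob of tracer is transported around the domain without
# creating or destroying mass (the scheme is diffusive but conservative).
x = np.linspace(0.0, 1.0, 100, endpoint=False)
c0 = np.exp(-((x - 0.3) / 0.05) ** 2)
c1 = advect_upwind(c0, u=1.0, dx=0.01, dt=0.005, steps=200)
```

Real models add convective mixing, multiple dimensions, and higher-order monotone schemes, but the conservation property shown here is the essential requirement for inferring CO2 sources and sinks from concentrations.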
Increasing NOAA's computational capacity to improve global forecast modeling
Hamill, Tom
Systems Division. Stephen J. Lord, Director, NWS NCEP Environmental Modeling Center. 19 July 2010. (303) 4973060, tom.hamill@noaa.gov. Executive Summary: The accuracy of many
Continuum-based Multiscale Computational Damage Modeling of Cementitous Composites
Kim, Sun-Myung
2011-08-08T23:59:59.000Z
...damage constitutive model, the effect of the micromechanical properties of concrete, such as aggregate shape, distribution, and volume fraction, the ITZ thickness, and the strength of the ITZ and mortar matrix on the tensile behavior of concrete ... 7.1 2-D Meso-scale Analysis Model of Concrete; 7.2 Material Properties of the ITZ and Mortar Matrix; 7.3 The Effect of the Aggregate Shape...
Internship Contract (Includes Practicum)
Thaxton, Christopher S.
A Novel Computational Model for Tilting Pad Journal Bearings with Soft Pivot Stiffnesses
Tao, Yujiao 1988-
2012-12-10T23:59:59.000Z
A novel tilting pad journal bearing model including pivot flexibility as well as temporal fluid inertia effects on the thin film fluid flow aims to accurately predict the bearing forced performance. The predictive model also accounts for the thermal...
WUFI COMPUTER MODELING WORKSHOP FOR WALL DESIGN AND PERFORMANCE
Oak Ridge National Laboratory
transport are included, along with the sorptive capacity of building construction materials. WUFI ORNL ... IN BUILDING ENVELOPES), Chicago, IL, April 10-11, 2012. Program made available by the U.S. Department ... and co-sponsored by the National Institute of Building Sciences (NIBS)/Building Enclosure Technology
WUFI COMPUTER MODELING WORKSHOP FOR WALL DESIGN AND PERFORMANCE
Oak Ridge National Laboratory
transport are included, along with the sorptive capacity of building construction materials. WUFI ORNL ... effects on modern construction. New techniques are shown, and participants are introduced to new material ... IN BUILDING ENVELOPES), Napa, CA, January 26-27, 2012. Program made available by the U.S. Department
WUFI COMPUTER MODELING WORKSHOP FOR WALL DESIGN AND PERFORMANCE
Oak Ridge National Laboratory
transport are included, along with the sorptive capacity of building construction materials. (WUFI ORNL IN BUILDING ENVELOPES) Tampa, FL, October 4-5, 2012. Program made available by the U.S. Department of Energy and co-sponsored by the National Institute of Building Sciences (NIBS) / Building Enclosure Technology
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy
2008-09-01T23:59:59.000Z
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high-consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that must be addressed as modeling and simulation tools move out of research laboratories and into the hands of decision makers.
This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high-consequence decision-making.
STOCHASTIC COMPUTATIONAL DYNAMICAL MODEL OF UNCERTAIN STRUCTURE COUPLED WITH AN INSULATION LAYER
Boyer, Edmond
To study the effect of insulation layers in complex dynamical systems for low- and medium-frequency ranges, such as car booming noise analysis, one introduces a simplified stochastic model of insulation layers based
Hawick, Ken
Simulation Modelling and Visualisation: Toolkits for Building Artificial Worlds. Computational Science Technical Note CSTN-052. Daniel Peter Playne, Anton P. Gerdelan, Arno Leist, and Chris J. Scogings
Reyes, Dasia Ann
2009-05-15T23:59:59.000Z
into the PANS models. This study concludes with an investigation of a low Reynolds number correction for the PANS k_u-omega_u model, which yields excellent improvement. ... V. CONCLUSIONS: A. Computational Issues Conclusions; B. Physical Issues Conclusions. VI. SUMMARY OF RECOMMENDATIONS...
Fluid computation of the performance-energy trade-off in large scale Markov models
Imperial College, London
Fluid computation of the performance-energy trade-off in large scale Markov models. Anton Stefanek. ... energy consumption while maintaining multiple service level agreements. 2. VIRTUALISED EXECUTION MODEL ... optimisation. We show how the fluid analysis naturally leads to a constrained global optimisation problem
Studying the energy efficiency of large-scale computer systems requires models of the relationship
Rivoire, Suzanne
Abstract: Studying the energy efficiency of large-scale computer systems requires models ... -node clusters using embedded, laptop, desktop, and server processors. These results demonstrate the need ... usage and power consumption. Therefore, a substantial body of literature models system-level power
Model Discovery for Energy-Aware Computing Systems: An Experimental Evaluation
Zadok, Erez
aware systems. Such models are also prerequisites for the application of control theory to energy-aware systems ... and a critical first step in designing advanced controllers that can dynamically manage the energy
A computational model for predicting damage evolution in laminated composite plates
Phillips, Mark Lane
1999-01-01T23:59:59.000Z
computationally tenable is shown herein. Due to the complicated nature of the many cracks and their interactions, a multi-scale micro-meso-local-global methodology is employed in order to model damage modes. Interface degradation is first modeled analytically...
A Unified RANS-LES Model: Computational Development, Accuracy and Cost1 Harish Gopalana
Heinz, Stefan
Reynolds-averaged Navier-Stokes (RANS) methods apply modeling assumptions to all the scales of motion. The use of LES methods ... Harish Gopalan, Stefan Heinz, Michael K. Stöllinger. Mechanical Engineering Department, University of Wyoming, 1000 E
Modelling propagation of sinkhole, in both slow and dynamic modes, using the UDEC computer code.
Paris-Sud XI, UniversitĂ© de
Modelling propagation of sinkholes, in both slow and dynamic modes, using the UDEC computer code. (RISques) Ecole des mines de Nancy, Parc de Saurupt, 54042 Nancy-Cedex, France ... how the sinkhole forms, and to propose a prediction model. The UDEC code is used. An actual case of sinkhole
Computational modeling of thermal conductivity of single walled carbon nanotube polymer composites
Maruyama, Shigeo
was developed to study the thermal conductivity of single-walled carbon nanotube (SWNT)-polymer composites ... the effects of interfacial resistance on the effective conductivity of the composites were quantified. The present model is a useful tool
Expressing and computing passage time measures of GSPN models with HASL
Paris-Sud XI, UniversitĂ© de
Expressing and computing passage time measures of GSPN models with HASL. Elvio Gilberto Amparore. ... measures in (tagged) GSPNs using the Hybrid Automata Stochastic Logic (HASL) and the statistical model checker COSMOS ... formally express them in HASL terms and assess them by means of simulation in the COSMOS tool. The interest
Computational model to evaluate port wine stain depth profiling using pulsed photothermal radiometry
Choi, Bernard
Computational model to evaluate port wine stain depth profiling using pulsed photothermal radiometry ... a photo-thermal model to evaluate the use of pulsed photothermal radiometry (PPTR) for depth profiling of port wine stains ... the desired effect. A diagnostic measurement of the distribution of laser energy deposition and ensuing
An efficient computational model for macroscale simulations of moving contact lines
Boyer, Edmond
(... with CO2, for example). A major challenge in numerical simulations of moving contact lines ... Y. Sui ... simulation of moving contact lines. The main purpose is to formulate and test a model wherein the macroscale
Asymptotical Computations for a Model of Flow in Saturated Porous Media
WeinmĂĽller, Ewa B.
a variably saturated porous medium with exponential diffusivity, such as soil, rock or concrete, is given by ... P. Amodio, C.J. Budd ... for an implicit second order ordinary differential equation which arises in models of flow in saturated porous media
A Three-Dimensional Computational Model of PEM Fuel Cell with Serpentine Gas Channels
Victoria, University of
by Phong ... A three-dimensional computational model of a PEM fuel cell with serpentine gas flow channels is presented in this thesis. This comprehensive model accounts for important transport phenomena in a fuel cell, such as heat transfer, mass transfer, and electrode
Benioff, P.
1980-01-01T23:59:59.000Z
A microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each number N and Turing machine Q there exists a Hamiltonian H_N^Q and a class of appropriate initial states such that, if Ψ_Q^N(0) is such an initial state, then Ψ_Q^N(t) = exp(-i H_N^Q t) Ψ_Q^N(0) correctly describes at times t_3, t_6, ..., t_{3N} model states that correspond to the completion of the first, second, ..., Nth computation step of Q. The model parameters can be adjusted so that for an arbitrary time interval Δ around t_3, t_6, ..., t_{3N}, the machine part of Ψ_Q^N(t) is stationary. 1 figure.
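The time-evolution law in this abstract, Ψ(t) = exp(-iHt) Ψ(0), can be checked numerically. Below is a minimal sketch (not Benioff's Turing-machine construction; the 2-level Hamiltonian is a hypothetical toy) that builds the propagator by diagonalizing a Hermitian H and confirms that the evolution preserves the state's norm, as unitarity requires.

```python
import numpy as np

# Toy check of psi(t) = exp(-i H t) psi(0) for a Hermitian Hamiltonian.
# This is NOT Benioff's Turing-machine Hamiltonian; H below is a
# hypothetical 2-level example used only to illustrate unitary evolution.
def evolve(H, psi0, t):
    """Apply exp(-i H t) to psi0 via the spectral decomposition of H."""
    w, V = np.linalg.eigh(H)                 # H = V diag(w) V^dagger
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ psi0

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # toy Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state
psi_t = evolve(H, psi0, t=0.7)
norm = abs(np.vdot(psi_t, psi_t))            # unitarity => norm stays at 1
```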
A computer music instrumentarium
Oliver La Rosa, Jaime Eduardo
2011-01-01T23:59:59.000Z
Chapter 6. COMPUTERS: To Solder or Not to ... Music Models: A Computer Music Instrumentarium ... Interactive Computer Systems
Pump apparatus including deconsolidator
Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew
2014-10-07T23:59:59.000Z
A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.
Computational Modeling of Conventionally Reinforced Concrete Coupling Beams
Shastri, Ajay Seshadri
2012-02-14T23:59:59.000Z
.20. Stress Distribution Showing the Formation of the Compression Strut. Fig. 6.1. Section and Reinforcement Details for Specimen NR4 (Bristowe 2000). Fig. 6.2. Test Setup for the Coupling Beams (Bristowe 2000) ... research on reinforced concrete coupling beams. Various types of failures observed in coupling beam tests are discussed in this section, including the following: shear compression (SC): this failure is usually seen in conventionally...
Mathematical and Computer Modelling 35 (2002) 743-749
Nenadic, Zoran
Senseman and Robbins [2,3] supported this hypothesis. They used voltage-sensitive dye methods ... 750 cells from different cortical layers. Our model captures the basic geometry and temporal structure ... and are best characterized. These are two types of pyramidal cells (the lateral and medial pyramidal cells
Computational Modelling of Particle Degradation in Dilute Phase Pneumatic Conveyors
Christakis, Nikolaos
the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline. Numerical results are presented for the degradation of granulated sugar in an industrial-scale ... handling, because of the change in particle properties such as particle size distribution, shape and
International Conference "Computational Modeling and Simulation of Materials" Sicily 2004
Webb, Roger P.
how the deformation of a silicon surface caused by a high-energy C60 impact can eject large cages. The use of C60 ions in secondary ion mass spectrometry (SIMS) as a probing beam is also showing ... In collaboration with the University of Karlsruhe, the simulation models have been verified for both low-energy
A Computational Framework for Modelling Aneurysm Inception due to Growth
Project Thesis, November 23, 2010. Matthias Kirchhart, Thorolf Schulte, Florian Lubisch, Michael Woopen. The artery is modelled to consist of elastin and collagen fibres, arranged in double helical ... 2.1.2 Displacement, Velocity and Substantial Derivative; 2.1.3 Deformation Gradient
Computational and physical models of RNA structure Ralf Bundschuh
Bundschuh, Ralf
Partition function. Definition: the partition function of an RNA molecule with energy function E[S] is given by ... Energy model: energetics in the molten phase. Definition: in the molten phase of RNA every base can ... Boltzmann partition function. Definition of RNA secondary structure. Definition: an RNA
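The partition function this fragment defines, Z = Σ_S exp(−E[S]/k_BT) over secondary structures S, can be illustrated with a toy enumeration (the three structure energies below are hypothetical placeholders; real folding codes compute Z recursively, McCaskill-style, rather than by listing structures):

```python
import math

RT = 0.616  # k_B * T in kcal/mol near 310 K

def partition_function(energies):
    """Z = sum over structures S of exp(-E[S] / RT)."""
    return sum(math.exp(-E / RT) for E in energies)

def boltzmann_prob(E, Z):
    """Equilibrium probability of one structure with energy E."""
    return math.exp(-E / RT) / Z

# Hypothetical toy ensemble: open chain plus two folded structures (kcal/mol).
energies = [0.0, -1.2, -2.5]
Z = partition_function(energies)
probs = [boltzmann_prob(E, Z) for E in energies]
```

The lowest-energy structure carries the largest Boltzmann weight, and the weights sum to one by construction.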
CASE FOR SUPPORT: Computational Modeling of Salience Sensitive Control
Heinke, Dietmar
in neural network modeling, machine learning, adaptive systems in general, and self-organising systems ... and verification of real-time systems [6]. A large amount of this research has been performed using the CADP verification environment, which is one of the most powerful tool suites available, boasting a spectrum
Computational modeling of biological cells and soft tissues
Unnikrishnan, Ginu U.
2009-05-15T23:59:59.000Z
derived material properties of cells have been found to vary by orders of magnitude, even for the same cell type. The primary cause of such disparity is attributed to the stimulation process and the theoretical models used to interpret the experimental data...
Computational Fuel Cell Research and SOFC Modeling at Penn State
multidisciplinary research on fuel cells and advanced batteries for vehicle propulsion and distributed power generation ... science, multiphase transport, reactive flow, CFD modeling, experimental diagnostics, in-vehicle testing, DMFC, and SOFC. ECEC Facilities (>5,000 sq ft): Fuel Cell/Battery Experimental Labs; Fuel Cell
Model construction: elements of a computational mechanism Jan M. _Zytkow
Ras, Zbigniew W.
Academy of Sciences, Warsaw, Poland. zytkow@uncc.edu. Abstract: Model construction is one of the key scientific ... of the main steps. As a body of mass m rolls down, its kinetic energy grows from zero to mv^2/2, where v is the final velocity. At the same time, its potential energy decreases from mgh to zero, where g is the Earth's acceleration
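The energy bookkeeping in the rolling-body example above can be verified in a few lines (a frictionless sketch; the mass and height values are arbitrary):

```python
import math

g = 9.81  # m/s^2, standard gravity

def final_speed(h):
    """Speed after descending height h, from m*g*h = m*v**2/2."""
    return math.sqrt(2 * g * h)

m, h = 2.0, 5.0                 # arbitrary mass (kg) and height (m)
v = final_speed(h)
potential = m * g * h           # energy lost from the potential term
kinetic = 0.5 * m * v ** 2      # energy gained by the kinetic term
```

Note that the mass cancels: the final speed depends only on g and h.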
NORTHWESTERN UNIVERSITY FDTD Computational Electromagnetics Modeling of Microcavity
Sheridan, Jennifer
Susan C. Hagness. Recent advances in materials technology and fabrication techniques have made ... analysis for quick, low-cost feasibility studies and allow for design optimization before devices are fabricated. Towards these goals, an algorithm for modeling frequency-dependent optical gain media
Computational Fluid Dynamics Modeling of the John Day Dam Tailrace
Rakowski, Cynthia L.; Perkins, William A.; Richmond, Marshall C.; Serkowski, John A.
2010-07-08T23:59:59.000Z
US Army Corps of Engineers - Portland District required that two-dimensional (2D) depth-averaged and three-dimensional (3D) free-surface numerical models be developed and validated for the John Day tailrace. These models were used to assess the potential impact of a select group of structural and operational alternatives to tailrace flows aimed at improving fish survival at John Day Dam. The 2D model was used for the initial assessment of the alternatives in conjunction with a reduced-scale physical model of the John Day Project. A finer-resolution 3D model was used to more accurately model the details of flow in the stilling basin and near-project tailrace hydraulics. Three-dimensional model results were used as input to the Pacific Northwest National Laboratory particle tracking software, and particle paths and times to pass a downstream cross section were used to assess the relative differences in travel times resulting from project operations and structural scenarios for multiple total river flows. Streamlines and neutrally-buoyant particles were seeded in all turbine and spill bays with flows. For a total river of 250 kcfs running with the Fish Passage Plan spill pattern and a spillwall, the mean residence times for all particles were little changed; however, the tails of the distribution were truncated for both spillway and powerhouse release points, and, for the powerhouse releases, the residence time for 75% of the particles to pass a downstream cross section was reduced from 45.5 minutes to 41.3 minutes. For a total river of 125 kcfs configured with the operations from the Fish Passage Plan for the temporary spillway weirs and for a proposed spillwall, the neutrally-buoyant particle tracking data showed that the river with a spillwall in place had an overall increase in mean residence time; however, the residence time for 75% of the powerhouse-released particles to pass a downstream cross section was reduced from 102.4 minutes to 89 minutes.
bioenergetics models were expanded to the population level and dynamically coupled to the lower trophic levels (LTL) of the NEMURO model. The individual fish bioenergetics model and the one-way coupling to NEMURO (i.e. NEMURO is run first and its output is used to force the fish bioenergetics model) are described
and free surface models and a global heat transfer model, with moving boundaries. An axisymmetric fluid model to determine the flow field, after the phase boundaries have been determined by the heat transfer model. A finite ... field, from which temperature gradients are determined. The heat transfer model is furthermore expanded
Spycher, N.; Oldenburg, C.M.
2014-01-01T23:59:59.000Z
This study uses modeling and simulation approaches to investigate the impacts on injectivity of trace amounts of mercury (Hg) in a carbon dioxide (CO2) stream injected for geologic carbon sequestration in a sandstone reservoir at ~2.5 km depth. At the range of Hg concentrations expected (7-190 ppbV, or ~0.06-1.6 mg/std. m3 CO2), the total volumetric plugging that could occur due to complete condensation of Hg, or due to complete precipitation of Hg as cinnabar, results in a very small porosity change. In addition, Hg concentrations much higher than those considered here would be required for Hg condensation to even occur. Concentration of aqueous Hg by water evaporation into CO2 is also unlikely, because the higher volatility of Hg relative to H2O at reservoir conditions prevents the Hg concentration from increasing in groundwater as dry CO2 sweeps through, volatilizing both H2O and Hg. Using a model-derived aqueous solution to represent the formation water, batch reactive geochemical modeling shows that the reaction of the formation water with the CO2-Hg mixture causes the pH to drop to about 4.7 and then become buffered near 5.2 upon reaction with the sediments, with a negligible net volume change from mineral dissolution and precipitation. Cinnabar (HgS(s)) is found to be thermodynamically stable as soon as the Hg-bearing CO2 reacts with the formation water, which contains small amounts of dissolved sulfide. Liquid mercury (Hg(l)) is not found to be thermodynamically stable at any point during the simulation.
Two-dimensional radial reactive transport simulations of CO2 injection at a rate of 14.8 kg/s into a 400 m-thick formation at isothermal conditions of 106°C and average pressure near 215 bar, with varying amounts of Hg and H2S trace gases, show generally that porosity changes only by about ±0.05% (absolute, i.e., new porosity = initial porosity ±0.0005), with Hg predicted to readily precipitate from the CO2 as cinnabar in a zone mostly matching the single-phase CO2 plume. The precipitation of minerals other than cinnabar, however, dominates the evolution of porosity. Main reactions include the replacement of primarily Fe-chlorite by siderite, of calcite by dolomite, and of K-feldspar by muscovite. Chalcedony is also predicted to precipitate from the dissolution of feldspars and quartz. Although the range of predicted porosity change is quite small, the amount of dissolution and precipitation predicted for these individual minerals is not negligible. These reactive transport simulations assume that Hg gas behaves ideally. To examine the effects of non-ideality on these simulations, approximate calculations of the fugacity coefficient of Hg in CO2 were made. Results suggest that Hg condensation could be significantly overestimated when assuming ideal gas behavior, making our simulation results conservative with respect to impacts on injectivity. The effect of pressure on Henry's constant for Hg is estimated to yield Hg solubilities about 10% lower than when this effect is not considered, a change that is considered too small to affect the conclusions of this report. Although all results in this study are based on relatively mature data and modeling approaches, in the absence of experimental data and more detailed site-specific information, it is not possible to fully validate the results and conclusions.
Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A
2011-11-04T23:59:59.000Z
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and in ion beam/hydrodynamic coupling experiments relevant to heavy-ion-based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions, is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or similar mechanisms for droplet breakup.
We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion energy.
Living Expenses (includes approximately
Maroncelli, Mark
& engineering programs; all other programs; Graduate: MBA/INFSY at Erie & Harrisburg (12 credits). Guarantee (does not include dependents' costs). Altoona, Berks, Erie, and Harrisburg: 12-month estimated
Computational models for the berry phase in semiconductor quantum dots
Prabhakar, S., E-mail: rmelnik@wlu.ca; Melnik, R. V. N., E-mail: rmelnik@wlu.ca [M2NeT Lab, Wilfrid Laurier University, 75 University Ave W, Waterloo, ON N2L 3C5 (Canada); Sebetci, A. [Department of Mechanical Engineering, Mevlana University, 42003, Konya (Turkey)
2014-10-06T23:59:59.000Z
By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.
Verification of a VRF Heat Pump Computer Model in EnergyPlus
Nigusse, Bereket; Raustad, Richard
2013-06-01T23:59:59.000Z
This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
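The dual-range curve structure described in this abstract can be sketched as follows. The biquadratic form below matches the standard EnergyPlus performance-curve shape; the coefficient values and the range boundary are hypothetical placeholders, not values from the paper or from any manufacturer.

```python
# Sketch of dual-range biquadratic performance curves:
#   f(x, y) = a + b*x + c*x**2 + d*y + e*y**2 + f*x*y
# with x = indoor wet-bulb and y = outdoor dry-bulb temperature (deg C,
# cooling mode). Coefficients and the split point are HYPOTHETICAL.
def biquadratic(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a + b*x + c*x*x + d*y + e*y*y + f*x*y

LOW_RANGE = (0.90, 0.010, 0.0, 0.005, 0.0, 0.0)   # placeholder coefficients
HIGH_RANGE = (0.95, 0.008, 0.0, 0.003, 0.0, 0.0)  # placeholder coefficients
BOUNDARY_ODB = 27.0                               # placeholder split point

def capacity_ratio(indoor_wb, outdoor_db):
    """'Dual range': pick a coefficient set by outdoor temperature range."""
    coeffs = LOW_RANGE if outdoor_db < BOUNDARY_ODB else HIGH_RANGE
    return biquadratic(coeffs, indoor_wb, outdoor_db)
```

In the real model the ratio scales rated capacity (or EIR) to the current operating conditions; a second quadratic in part-load ratio then adjusts for part-load operation.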
Ahmadi, G.
1994-06-01T23:59:59.000Z
In the period of December 1, 1993 to February 28, 1994, considerable progress was made in the experimental study of the monogranular-layer simple shear flow device. Experimental data concerning the mean granular velocity, fluctuation velocity, and solid volume fraction were obtained. The resulting data revealed new interesting features of particulate flows. The thermodynamically consistent, rate-dependent model for turbulent two-phase flows and its application to the analysis of simple shear flow was completed. Further progress was made on the application of the kinetic model for rapid flows of granular materials, including the frictional energy losses. The flow over a vibrating plate was analyzed, and the velocity, fluctuation energy, and solid volume fraction profiles were evaluated. A computational model for analyzing flow of granular materials in ducts and passages with bumpy walls was developed. The special cases of Couette and chute flows were analyzed. The results were compared with the experimental data and good agreement was observed. Flows of gas-solid mixtures in vertical ducts were studied. A computational model for analyzing two-phase flow was developed, and the phasic mean velocity and fluctuation energy profiles were evaluated.
Computational Human Performance Modeling For Alarm System Design
Jacques Hugo
2012-07-01T23:59:59.000Z
The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations, and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Dave Higdon; Jordan D. McDonnell; Nicolas Schunck; Jason Sarich; Stefan M. Wild
2014-09-17T23:59:59.000Z
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory (DFT) model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory (ANL).
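The emulator idea in this abstract can be sketched minimally (a deliberately simplified stand-in: a quadratic response surface and a grid posterior in place of a Gaussian-process emulator and MCMC; the toy model eta, design points, and noise level are all assumptions):

```python
import numpy as np

def eta(theta):
    """Stand-in for an expensive physics model (toy, cheap to evaluate)."""
    return np.sin(theta) + 0.5 * theta

# 1. Ensemble of model runs at a handful of design points.
design = np.linspace(0.0, 2.0, 9)
runs = eta(design)

# 2. Cheap quadratic response surface ("emulator") fit to the ensemble.
emulator = np.poly1d(np.polyfit(design, runs, deg=2))

# 3. Posterior for theta on a grid: Gaussian likelihood, flat prior on [0, 2].
#    The emulator, not the expensive model, is called inside the inference loop.
y_obs, sigma = eta(1.3) + 0.01, 0.05          # synthetic measurement
grid = np.linspace(0.0, 2.0, 401)
log_like = -0.5 * ((y_obs - emulator(grid)) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()

theta_map = grid[np.argmax(post)]             # lands near the true value 1.3
```

The expensive model is evaluated only nine times (the ensemble); all posterior evaluations go through the emulator, which is the computational point of the approach.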
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
Diachin, L F; Garaizar, F X; Henson, V E; Pope, G
2009-10-12T23:59:59.000Z
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.
Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.
1986-11-01T23:59:59.000Z
COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods.
Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton
2009-01-10T23:59:59.000Z
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics (256 KB of local memory per SPE and a new low-level communication mechanism) make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity (the ratio of computational load to main memory transfers), and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDized four independent columns and included several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
Simulating the Transverse Ising Model on a Quantum Computer: Error Correction with the Surface Code
Hao You; Michael R. Geller; P. C. Stancil
2013-03-29T23:59:59.000Z
We estimate the resource requirements for the quantum simulation of the ground-state energy of the one-dimensional quantum transverse Ising model (TIM), based on the surface code implementation of a fault-tolerant quantum computer. The surface code approach has one of the highest known tolerable error rates (1%), which currently makes it one of the most practical quantum computing schemes. Compared to results for the same model using the concatenated Steane code, the current results indicate that the simulation time is comparable but that the number of physical qubits for the surface code is 2-3 orders of magnitude larger than for the concatenated code. Considering that the tolerable error threshold of the surface code is four orders of magnitude higher than that of the concatenated code, building a quantum computer with a surface code implementation appears more promising given current physical hardware capabilities.
Paris-Sud XI, Université de
Computing combustion noise by combining Large Eddy Simulations with analytical models. Presented by Ignacio Duran. Abstract: Two mechanisms control combustion noise generation, as shown by Marble. A method to calculate combustion-generated noise has been implemented in a tool called CHORUS. The method...
Broader source: Energy.gov [DOE]
The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.
Computational Modeling and the Experimental Plasma Research Program: A White Paper. Submitted... of the fusion energy program. The experimental plasma research (EPR) program is well positioned to make major... in fusion development and promote scientific discovery. Experimental plasma research projects explore...
A Staged Model for the Software Life Cycle (Computer, © 2000 IEEE, 0018-9162/00/$10.00)
Software... change during a system's life cycle. Manny Lehman [3] documented the inevitability of the evolution stage and late stages of the life cycle [5]. During initial development, engineers build...
In-Vehicle Testing and Computer Modeling of Electric Vehicle Batteries
Wang, Chao-Yang
In-Vehicle Testing and Computer Modeling of Electric Vehicle Batteries. B. Thomas, W.B. Gu, J... Testing was performed for both VRLA and NiMH batteries using Penn State University's electric vehicle, the Electric Lion... and hybrid-electric vehicles. A thorough understanding of battery systems from the point of view...
Pre-clinical evaluation of ceramic femoral head resurfacing prostheses using computational models. ...534 kN to 5.34 kN. In worst-case tests representing a complete lack of superior femoral head bone... in resurfacing hip replacement (RHR) have been reported as early femoral neck fracture, infection, and loosening...
Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational Approach
Nagurney, Anna
...of medical nuclear supply chains. Our focus is on the molybdenum supply chain, which is the most commonly... is of special relevance to healthcare given the medical nuclear product's widespread use as well as the aging...
AN ADVANCED COMPUTATIONAL APPROACH TO SYSTEM MODELING OF TOKAMAK POWER PLANTS Zoran Dragojlovic1
Najmabadi, Farrokh
...power plant system studies is being developed for the ARIES program. An operational design space has... power plants. This allows examination of a multi-dimensional trade space as opposed to traditional...
Building ventilation : a pressure airflow model computer generation and elements of
Paris-Sud XI, Université de
Building ventilation: a pressure airflow model, computer generation and elements of validation. H... When heating a residential building, approximately 30% of the energy loss is due to air renewal [1]. Thus... in tropical climates, natural ventilation affects essentially the inside comfort by favouring...
COMPUTATIONAL CHALLENGES IN THE NUMERICAL TREATMENT OF LARGE AIR POLLUTION MODELS
Dimov, Ivan
COMPUTATIONAL CHALLENGES IN THE NUMERICAL TREATMENT OF LARGE AIR POLLUTION MODELS. I. Dimov, K. Georgiev, Tz. Ostromsky, R. J. van der Pas, and Z. Zlatev. Abstract: Air pollution, and especially the reduction of air pollution to acceptable levels, is an important environmental problem, which...
Utero-fetal unit and pregnant woman modeling using a computer graphics approach for
Boubekeur, Tamy
Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry; studies performed in vivo on animals and in vitro at the cellular level are complemented by simulated dosimetry...
A Computational Model of Aging and Calcification in the Aortic Heart Valve
Mofrad, Mohammad R. K.
A Computational Model of Aging and Calcification in the Aortic Heart Valve. Eli J. Weinberg. Abstract: The aortic heart valve undergoes geometric and mechanical changes over time. The cusps of a normal, healthy valve thicken and become less extensible over time. In the disease calcific aortic...
S5 × S5 × S5 lacks the finite model property. Dept. of Computer Science
Kurucz, Agi
S5 × S5 × S5 lacks the finite model property. A. Kurucz, Dept. of Computer Science, King's College London. Abstract: It follows from algebraic results of Maddux that every multi-modal logic L such that [S5, S5, ..., S5] ⊆ L ⊆ S5^n is undecidable whenever n ≥ 3. This implies that the product logic S5 × S5 × S5...
Teaching canal hydraulics and control using a computer game or a scale model canal
Paris-Sud XI, Université de
Pierre... Irrigation canals are now designed and built using modern technologies allowing advanced control procedures... systems with automatic control algorithms. Modernization can also improve the quality of service to water...
A versatile computer model for the design and analysis of electric and hybrid vehicles
Stevens, Kenneth Michael
1996-01-01T23:59:59.000Z
The primary purpose of the work reported in this thesis was to develop a versatile computer model to facilitate the design and analysis of hybrid vehicle drive-trains. A hybrid vehicle is one in which power for propulsion comes from two distinct...
Chen, Qingyan "Yan"
Experimental Validation of a Computational Fluid Dynamics Model for IAQ Applications in Ice Rinks. Abstract: Many ice rink arenas have ice resurfacing equipment that uses fossil... temperature distributions in ice rinks. The numerical results agree reasonably with the corresponding...
Computational Modeling of Plasmon-Enhanced Light Absorption in a Multicomponent Dye Sensitized... can be mitigated by using dye-sensitized solar cells (DSSCs) [4], which use organic dye molecules coated... by nearly an order of magnitude through plasmon-enhanced absorption by the dye [10]. This particular solar cell...
Many Task Computing for Modeling the Fate of Oil Discharged from the Deep Water Horizon Well
...causing the riser pipe to rupture and crude oil to flow into the Gulf of Mexico from an approximate depth... tons of crude oil into the Gulf of Mexico. In order to understand the fate and impact of the discharged oil...
A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems
Cintra, Marcelo
A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems. Abstract: Many timed extensions of Petri nets have been proposed in the literature, but their analytical solutions impose limitations on the time distributions and the net topology. To overcome these limitations...
Bayesian Emulation of Complex Multi-Output and Dynamic Computer Models
Oakley, Jeremy
Stefano Conti, Anthony O'Hagan. ...the case). In particular, standard Monte Carlo-based methods of sensitivity analysis (extensively reviewed... O'Hagan, 2002), offering substantial efficiency gains over standard Monte Carlo-based methods. These authors...
Mathematical and Computer Modelling 35 (2002) 1371-1375
Gorban, Alexander N.
2002-01-01T23:59:59.000Z
Application to the Efficiency of Free Flow Turbines. A. Gorban', Institute of Computational Modeling, Russian... obstacle is considered. Its application to estimating the efficiency of free flow turbines is discussed... hydraulic turbines, i.e., the turbines that work without dams [1]. For this kind of turbine, the term...
Computable General Equilibrium Models for the Analysis of Energy and Climate Policies
Wing, Ian Sue
Computable General Equilibrium Models for the Analysis of Energy and Climate Policies. Ian Sue Wing, ...Change, MIT. Prepared for the International Handbook of Energy Economics. Abstract: This chapter is a simple... of energy and environmental policies. Perhaps the most important of these applications is the analysis...
Computational Modeling of Electrolyte/Cathode Interfaces in Proton Exchange Membrane Fuel Cells
Bjřrnstad, Ottar Nordal
Computational Modeling of Electrolyte/Cathode Interfaces in Proton Exchange Membrane Fuel Cells. Dr... Proton exchange membrane fuel cells (PEMFCs) are alternative energy conversion devices that efficiently... The fundamental relationship between operating conditions and device performance will help to optimize the device...
Calheiros, Rodrigo N.
CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications Bhathiya Wickremasinghe1 , Rodrigo N. Calheiros2 , and Rajkumar Buyya1 1 The Cloud Computing and Distributed Systems (CLOUDS) Laboratory Department of Computer Science and Software Engineering The University
Unit physics testing of a mix model in an eulerian fluid computation
Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory
2010-01-01T23:59:59.000Z
A K-L turbulence mix model driven with a drag-buoyancy source term is tested in an Eulerian code in a series of basic unit-physics tests, as part of a mix validation milestone. The model and the closure coefficient values are derived in the work of Dimonte-Tipton [D-T] in Phys. Fluids 18, 085101 (2006), and many of the test problems were reported there, where the mix model operated in Lagrange computations. The drag-buoyancy K-L mix model was implemented within the Eulerian code framework by A.J. Scannapieco. Mix model performance is evaluated in terms of mix width growth rates compared to experiments in select regimes. Results in our Eulerian code are presented for several unit-physics 1-D test problems including the decay of homogeneous isotropic turbulence (HIT), Rayleigh-Taylor (RT) unstable mixing, shock amplification of initial turbulence, and Richtmyer-Meshkov (RM) mixing in several single-shock test cases and in comparison to two RM experiments including re-shock (Vetter-Sturtevant and Poggi et al.). Sensitivity to model parameters, to Atwood number, and to initial conditions is examined. Results here are in good agreement in some tests (HIT, RT) with the previous results reported for the mix model in the Lagrange calculations. The HIT turbulent decay agrees closely with analytic expectations, and the RT growth rate matches experimental values for the default values of the model coefficients proposed in [D-T]. Results for RM characterized with a power-law growth rate differ from the previous mix model work but are still within the range for reasonable agreement with experiments. Sensitivity to IC values in the RM studies is examined; results are sensitive to the initial value of L[t=0], which largely determines the RM mix layer growth rate and generally differs from the IC values used in the RT studies. Result sensitivity to initial turbulence, K[t=0], is seen to be small but significant above a threshold value.
Initial conditions can be adjusted so that single-shock RM mix width results match experiments, but we have not been able to obtain a good match for first-shock and re-shock growth rates in the same experiment with a single set of parameters and ICs. Problematic issues with KH test problems are described. Resolution studies for an RM test problem show the K-L mix growth rate decreases as it converges at a supra-linear rate, and convergence requires a fine grid (on the order of 10 microns). For comparison, a resolution study of a second mix model [Scannapieco and Cheng, Phys. Lett. A 299(1), 49 (2002)] acting on a two-fluid interface problem was examined. The mix in this case was found to increase with grid resolution at low to moderate resolutions, but converged at comparably fine resolutions. In conclusion, these tests indicate that the Eulerian code K-L model, using the Dimonte-Tipton default model closure coefficients, achieves reasonable results across many of the unit-physics experimental conditions. However, we were unable to obtain good matches simultaneously for shock and re-shock mix in a single experiment. Results are sensitive to initial conditions in the regimes under study, with different ICs best suited to RT or RM mix. It is reasonable to expect IC sensitivity in extrapolating to high-energy-density regimes, or to experiments with deceleration due to arbitrary combinations of RT and RM. As a final comparison, the atomically generated mix fraction and the mix width were each compared for the K-L mix model and the Scannapieco model on an identical RM test problem. The Scannapieco mix fraction and width grow linearly. The K-L mix fraction and width grow with the same power-law exponent, in contrast to expectations from analysis.
In future work it is proposed to do more head-to-head comparisons between these two models and other mix model options on a full suite of physics test problems, such as interfacial deceleration due to pressure build-up during an idealized ICF implosion.
Technical Review of the CENWP Computational Fluid Dynamics Model of the John Day Dam Forebay
Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.
2010-12-01T23:59:59.000Z
The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, the Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model, run in the CFD solver software STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound; however, CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.
High Performance Computing Modeling Advances Accelerator Science for High Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-04-29T23:59:59.000Z
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).
High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Amundson, James [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Macridin, Alexandru [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Spentzouris, Panagiotis [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)
2014-11-01T23:59:59.000Z
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).
Three-dimensional computer modeling of particulate flow around dust monitors
Nichols, B.D.; Gregory, W.S.
1987-01-01T23:59:59.000Z
SOLA-DM is a three-dimensional finite-difference computer code designed to model the dynamics of an incompressible fluid and the transport of discrete particulate material around obstacles impervious to flow. The numerical methods used in this code are described. SOLA-DM was used to predict the particle flux sampled by the 10-mm Dorr-Oliver Cyclone and MINIRAM dust monitors. Various geometric and dynamic variations of monitor and airflow combinations were tested. The code predictions are shown in computer-generated graphic plots.
Simons, Jack
Chapter 19: Corrections to the mean-field model are needed to describe the instantaneous Coulombic... unrestricted Hartree-Fock (UHF) theory, in which each spin-orbital i has its own orbital energy εi and LCAO-MO coefficients C... more flexible than the single-determinant HF procedure are needed. In particular, it may be necessary to use...
Li, Yun-He; Zhang, Xin
2014-01-01T23:59:59.000Z
Dark energy can modify the dynamics of dark matter if there exists a direct interaction between them. Thus a measurement of the structure growth, e.g., redshift-space distortions (RSD), can be a powerful tool to constrain the interacting dark energy (IDE) models. For the widely studied $Q=3\beta H\rho_{de}$ model, previous works showed that only a very small coupling ($\beta\sim\mathcal{O}(10^{-3})$) can survive in current RSD data. However, all these analyses have to assume $w>-1$ and $\beta>0$ due to the existence of the large-scale instability in the IDE scenario. In our recent work [Phys. Rev. D 90, 063005 (2014)], we successfully solved this large-scale instability problem by establishing a parametrized post-Friedmann (PPF) framework for the IDE scenario. So we, for the first time, have the ability to explore the full parameter space of the IDE models. In this work, we reexamine the observational constraints on the $Q=3\beta H\rho_{de}$ model within the PPF framework. By using the Planck data, th...
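The coupling $Q=3\beta H\rho_{de}$ in this abstract modifies the background continuity equations for dark matter and dark energy. A minimal sketch follows, assuming a common sign convention (dark matter gains the transferred energy) and illustrative parameter values; the explicit Euler integrator and all names are assumptions, not the paper's analysis pipeline.

```python
# Background evolution for an interacting dark energy model with
# energy exchange Q = 3*beta*H*rho_de. In e-folds N = ln(a):
#   d(rho_dm)/dN = -3*rho_dm + 3*beta*rho_de
#   d(rho_de)/dN = -3*(1 + w)*rho_de - 3*beta*rho_de
# Densities are in units of today's critical density.
beta, w = 0.002, -1.1          # illustrative values, not fitted to data
rho_dm0, rho_de0 = 0.27, 0.73  # present-day density parameters

def integrate(beta, w, n_steps=10000, n_max=5.0):
    # Explicit Euler, integrating backwards from N=0 (today) to N=-n_max.
    dN = -n_max / n_steps
    rho_dm, rho_de = rho_dm0, rho_de0
    for _ in range(n_steps):
        d_dm = -3.0 * rho_dm + 3.0 * beta * rho_de
        d_de = -3.0 * (1.0 + w) * rho_de - 3.0 * beta * rho_de
        rho_dm += dN * d_dm
        rho_de += dN * d_de
    return rho_dm, rho_de

rho_dm_early, rho_de_early = integrate(beta, w)
rho_dm_lcdm, _ = integrate(0.0, -1.0)
# With beta > 0, dark matter redshifts slightly slower than a^-3,
# so its early-time density sits below the LCDM value.
print(rho_dm_early < rho_dm_lcdm)
```

This is the background-level effect only; the RSD constraints discussed in the abstract additionally require the perturbation equations, which is where the large-scale instability and the PPF treatment enter.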
Victoria, University of
On the Use of Computational Models for Wave Climate Assessment in Support of the Wave Energy Industry. Effective, economic extraction of ocean wave energy requires an intimate understanding of the ocean wave...
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache, CEA, France)
2011-06-01T23:59:59.000Z
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. 
At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user-base and the experimental validation base was decaying away quickly.
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
Maiden, Wendy M.
2010-05-01T23:59:59.000Z
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm (their lightweight, ephemeral nature and indirect communication) make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics of trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
Superior model for fault tolerance computation in designing nano-sized circuit systems
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com [Fundamental and Applied Sciences Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak (Malaysia); Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my [Electrical and Electronics Engineering Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak (Malaysia)
2014-10-24T23:59:59.000Z
As CMOS technology scales into the nanometer regime, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is well explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
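The probabilistic-gate-model idea described above (propagating signal probabilities gate by gate under a per-gate output-flip probability) can be sketched for a tiny circuit. The circuit, gate error value, and function names below are illustrative assumptions, not the authors' Matlab tool.

```python
from itertools import product

EPS = 0.05  # per-gate output-flip probability (von Neumann error model), illustrative

def nand(pa, pb, eps=EPS):
    # PGM-style propagation: probability the gate outputs 1, given the
    # probabilities its inputs are 1, when the gate flips its output
    # with probability eps.
    p1 = 1.0 - pa * pb                      # ideal probability of outputting 1
    return (1.0 - eps) * p1 + eps * (1.0 - p1)

def circuit(a, b, c, eps=EPS):
    # Example circuit: OUT = NAND(NAND(a, b), c); inputs assumed error-free.
    return nand(nand(a, b, eps), c, eps)

def reliability(eps=EPS):
    # Average, over all input patterns, of the probability that the noisy
    # output matches the error-free output.
    total = 0.0
    for a, b, c in product([0.0, 1.0], repeat=3):
        ideal = circuit(a, b, c, eps=0.0)   # exactly 0.0 or 1.0
        noisy = circuit(a, b, c, eps=eps)
        total += noisy if ideal == 1.0 else 1.0 - noisy
    return total / 8.0

print(round(reliability(), 4))  # 0.9275 for this two-NAND circuit at eps=0.05
```

Note the scaling problem the abstract raises: exact pattern-by-pattern evaluation enumerates all 2^n input combinations, which is why automated tools and model generalizations matter for large circuits.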
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11T23:59:59.000Z
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
A Computational Model based on Gross' Emotion Regulation Theory Tibor Bosse (tbosse@few.vu.nl)
Treur, Jan
This paper presents a computational model for emotion regulation, formalizing the model informally described by Gross (1998). The model covers both quantitative aspects (such as levels of emotional response) and qualitative aspects (such as decisions to regulate one's emotion).
Bandy, P.J.; Hall, L.F.
1993-03-01T23:59:59.000Z
This report presents information on computer codes for numerical and analytical models that have been used at the Idaho National Engineering Laboratory (INEL) to model ground water and surface water flow and contaminant transport. Organizations conducting modeling at the INEL include EG&G Idaho, Inc., the US Geological Survey, and Westinghouse Idaho Nuclear Company. The information provided for each computer code includes: the agency responsible for the modeling effort, the name of the code, the proprietor of the code (copyright holder or original author), validation and verification studies, applications of the model at INEL, the prime user of the model, a description of the code, computing environment requirements, and documentation and references for the code.
Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling
Todorov, E. I. [EWI, Engineering and NDE, 1250 Arthur E. Adams Dr., Columbus, OH 43221-3585 (United States)
2007-03-21T23:59:59.000Z
The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The high-frequency probe study investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding-probe study addresses the effect of conductivity between the fastener and the hole, frequency, and coil separation distance on the detectability of flaws in subsurface layers.
A floating-point processor for the Texas Instruments model 980A computer
Brinkmann, Hubert Eldie
1977-01-01T23:59:59.000Z
OF SCIENCE May 1977 Major Subject: Electrical Engineering A FLOATING-POINT PROCESSOR FOR THE TEXAS INSTRUMENTS MODEL 980A COMPUTER A Thesis by HUBERT ELDIE BRINKMANN, JR. Approved as to style and content by: (Chairman of Committee) (Head of Department)... part of the subtrahend has been two's complemented. Floating-Point Multiplication: After the characteristic and mantissa have been separated, the characteristics of the two numbers are added and the mantissas are multiplied to initiate...
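The multiplication step described in the excerpt (add the characteristics, multiply the mantissas, then renormalize) can be sketched in simplified form; this models a generic normalized base-2 format and does not assume the 980A's actual word layout or excess coding:

```python
# Simplified floating-point multiplication sketch: values are modeled as
# (sign, mantissa, exponent) with mantissa in [0.5, 1.0), so that
# value = sign * mantissa * 2**exponent.

def fp_multiply(a, b):
    sign_a, man_a, exp_a = a
    sign_b, man_b, exp_b = b
    sign = sign_a * sign_b
    exp = exp_a + exp_b          # add the characteristics (exponents)
    man = man_a * man_b          # multiply the mantissas: product in [0.25, 1.0)
    if man < 0.5:                # renormalize if the product lost a leading bit
        man *= 2.0
        exp -= 1
    return (sign, man, exp)

def to_float(v):
    sign, man, exp = v
    return sign * man * 2.0 ** exp

x = (1, 0.75, 2)    # 3.0
y = (1, 0.5, 2)     # 2.0
print(to_float(fp_multiply(x, y)))   # 6.0
```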
Computer simulation and modeling; you've got quite a task before you.
Vonessen, Nikolaus
CS 477/577 Computer Simulation & Modeling. Contents: Instructor Information; Online Course Tools; Textbooks; Grading Breakdown; Assessment (CS 477, CS 577, CS 577 Final Project Milestones, Co-convening Courses); Other Issues Related to Grades (Flexibility of Grading Breakdown, Pass/Fail).
NREL Computer Models Integrate Wind Turbines with Floating Platforms (Fact Sheet)
Not Available
2011-07-01T23:59:59.000Z
Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.
Increasing the chemical content of turbulent flame models through the use of parallel computing
Yam, C.G.; Armstrong, R.; Koszykowski, M.L. [Sandia National Labs., Livermore, CA (United States); Chen, J.Y. [California Univ., Berkeley, CA (United States); Bui-Pham, M.N. [Lawrence Berkeley National Lab., CA (United States)
1996-10-01T23:59:59.000Z
This report outlines the effort to model a time-dependent, two-dimensional, turbulent, nonpremixed flame with full chemistry with the aid of parallel computing tools. In this study, the mixing process and the chemical reactions occurring in the flow field are described in terms of the single-point probability density function (PDF), while the turbulent viscosity is determined by the standard kappa-epsilon model. The initial problem solved is a H{sub 2}/air flame whose chemistry is described by 28 elementary reactions involving 9 chemical species.
A.V.G. Chizmeshya; M.J. McKelvy; G.H. Wolf; R.W. Carpenter; D.A. Gormley; J.R. Diefenbacher; R. Marzke
2006-03-01T23:59:59.000Z
Fossil fuels currently provide 85% of the world's energy needs, with the majority coming from coal, due to its low cost, wide availability, and high energy content. The extensive use of coal-fired power assumes that the resulting CO2 emissions can be vented to the atmosphere. However, exponentially increasing atmospheric CO2 levels have brought this assumption under critical review. Over the last decade, the discussion has evolved from whether exponentially increasing anthropogenic CO2 emissions will adversely affect the global environment to the timing and magnitude of their impact. A variety of sequestration technologies are being explored to mitigate CO2 emissions. These technologies must be both environmentally benign and economically viable. Mineral carbonation is an attractive candidate technology, as it disposes of CO2 as geologically stable, environmentally benign mineral carbonates, clearly satisfying the first criterion. The primary challenge for mineral carbonation is cost-competitive process development. CO2 mineral sequestration, the conversion of stationary-source CO2 emissions into mineral carbonates (e.g., magnesium and calcium carbonate, MgCO3 and CaCO3), has recently emerged as one of the most promising sequestration options, providing permanent CO2 disposal rather than storage. In this approach a magnesium-bearing feedstock mineral (typically serpentine or olivine, available in vast quantities globally) is specially processed and allowed to react with CO2 under controlled conditions. This produces a mineral carbonate which (1) is environmentally benign, (2) already exists in nature in quantities far exceeding those that could result from carbonating the world's known fossil fuel reserves, and (3) is stable on a geological time scale. Minimizing the process cost via optimization of the reaction rate and degree of completion is the remaining challenge.
As members of the DOE/NETL managed National Mineral Sequestration Working Group we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO2 mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach has provided a deeper understanding of the key reaction mechanisms than either individual approach can alone. 
We used ab initio techniques to significantly advance our understanding of atomic-level processes at the solid/solution interface by elucidating the origin of vibrational, electronic, x-ray and electron energy loss sp
Unit physics performance of a mix model in Eulerian fluid computations
Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory
2011-01-25T23:59:59.000Z
In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
Towle, J.N. (Diversified EM, Seattle, WA (US)); Prabhakara, F.S. (Power Technologies, Inc., Schenectady, NY (US)); Ponder, J.Z. (PJM Interconnection, Norristown, PA (US))
1992-07-01T23:59:59.000Z
This paper describes an ionospheric source current model and the development of an earth resistivity model used to calculate geomagnetically induced currents (GIC) on the Pennsylvania-New Jersey-Maryland Interconnection (PJM). The ionospheric current is modelled as a Gaussian-distributed current sheet above the earth. Geological details are included by dividing the PJM service area into 11 different earth resistivity regions. The resulting earth surface potential (ESP) at each power system substation is then calculated. A companion paper describes how this ESP is applied to the power system model to calculate the geomagnetically induced current in power system equipment and facilities.
Computational fluid dynamics modeling of coal gasification in a pressurized spout-fluid bed
Zhongyi Deng; Rui Xiao; Baosheng Jin; He Huang; Laihong Shen; Qilei Song; Qianjun Li [Southeast University, Nanjing (China). Key Laboratory of Clean Coal Power Generation and Combustion Technology of Ministry of Education
2008-05-15T23:59:59.000Z
Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, is extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas phase reaction submodels. The rates of heterogeneous reactions are determined by combining the Arrhenius kinetic rate and the diffusion rate. The homogeneous reactions of the gas phase are treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
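The combination of Arrhenius and diffusion rates mentioned above is commonly expressed as resistances in series; a minimal sketch with illustrative parameter values (not taken from the paper):

```python
import math

# Heterogeneous char-reaction rate combining an Arrhenius kinetic rate
# with a film-diffusion rate in series (resistances add), a common
# treatment in gasification CFD models. All parameter values below are
# illustrative only.

def arrhenius_rate(A, E, T, R=8.314):
    """Kinetic rate coefficient k_chem = A * exp(-E / (R*T))."""
    return A * math.exp(-E / (R * T))

def combined_rate(k_chem, k_diff):
    """Series combination: 1/k = 1/k_chem + 1/k_diff."""
    return 1.0 / (1.0 / k_chem + 1.0 / k_diff)

k_chem = arrhenius_rate(A=2.0e3, E=1.2e5, T=1200.0)  # kinetics term
k_diff = 0.5                                          # film-diffusion term
k = combined_rate(k_chem, k_diff)
# The overall rate is always limited by the slower of the two steps:
print(k <= min(k_chem, k_diff))   # True
```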
Data-Driven Optimization for Modeling in Computer Graphics and Vision
Yu, Lap Fai
2013-01-01T23:59:59.000Z
Contents and list-of-figures excerpt, including the sections "Clothing in Computer Graphics" and "The Computer Graphics Perspective".
Quantum Analogical Modeling: A General Quantum Computing Algorithm for Predicting Language Behavior
Royal Skousen
2005-10-18T23:59:59.000Z
This paper proposes a general quantum algorithm that can be applied to any classical computer program. Each computational step is written using reversible operators, but the operators remain classical in that the qubits take on values of only zero and one. This classical restriction on the quantum states allows the copying of qubits, a necessary requirement for doing general classical computation. Parallel processing of the quantum algorithm proceeds because of the superpositioning of qubits, the only aspect of the algorithm that is strictly quantum mechanical. Measurement of the system collapses the superposition, leaving only one state that can be observed. In most instances, the loss of information as a result of measurement would be unacceptable. But the linguistically motivated theory of Analogical Modeling (AM) proposes that the probabilistic nature of language behavior can be accurately modeled in terms of the simultaneous analysis of all possible contexts (referred to as supracontexts), provided one selects a single supracontext from those supracontexts that are homogeneous in behavior (namely, supracontexts that allow no increase in uncertainty). The amplitude for each homogeneous supracontext is proportional to its frequency of occurrence, with the result that the probability of selecting one particular supracontext to predict the behavior of the system is proportional to the square of its frequency.
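The selection rule in the final sentence (probability proportional to the square of the frequency) can be sketched directly; the supracontext names and frequencies below are illustrative:

```python
# Selection probabilities for homogeneous supracontexts in Analogical
# Modeling: amplitude ~ frequency, so selection probability ~ frequency**2.

def selection_probabilities(frequencies):
    weights = {k: f * f for k, f in frequencies.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

freqs = {"supracontext_A": 3, "supracontext_B": 1}
probs = selection_probabilities(freqs)
print(probs["supracontext_A"])   # 0.9, i.e. 3**2 / (3**2 + 1**2)
```

The quadratic weighting means frequent supracontexts dominate prediction more strongly than under simple frequency-proportional sampling.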
none,
1982-04-01T23:59:59.000Z
The Los Alamos National Laboratory is conducting rock fragmentation research in oil shale to develop the blasting technologies and designs required to create a rubble bed for a modified in situ retort. This report outlines our first field experiments at the Anvil Points Mine in Colorado. These experiments are part of a research program, sponsored by the Laboratory through the Department of Energy and by a Consortium of oil companies. Also included are some typical numerical calculations made in support of proposed field experiments. Two papers detail our progress in computer modeling and theory. The first presents a method for eliminating hourglassing in two-dimensional finite-difference calculations of rock fracture without altering the physical results. The second discusses the significant effect of buoyancy on tracer gas flow through the retort. A paper on retort stability details a computer application of the Schmidt graphical method for calculating fine-scale temperature gradients in a retort wall. The final paper, which describes our approach to field experiments, presents the instrumentation and diagnostic techniques used in rock fragmentation experiments at Anvil Points Mine.
Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment
Bowman, Ian; Shalf, John; Ma, Kwan-Liu; Bethel, Wes
2004-06-30T23:59:59.000Z
The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such a distributed pipeline is becoming available, but few services support even marginally optimal resource selection and partitioning of the data analysis workflow. We explore a methodology for building a model of overall application performance using a composition of the analytic models of individual components that comprise the pipeline. The analytic models are shown to be accurate on a testbed of distributed heterogeneous systems. The prediction methodology will form the foundation of a more robust resource management service for future Grid-based visualization applications.
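Composing per-component analytic models into an end-to-end prediction, as described above, can be sketched as follows; the stage cost models, rates, and candidate partitionings are illustrative, not the paper's calibrated models:

```python
# Sketch of composing per-stage analytic models into an end-to-end
# performance prediction for a visualization pipeline, then choosing the
# resource assignment with the lowest predicted total time.

def pipeline_time(data_mb, stages):
    """Total time = sum of per-stage analytic models.

    Each stage is (rate_mb_s, reduction): it processes the current data
    size at rate_mb_s, then passes data_mb * reduction downstream.
    """
    total = 0.0
    for rate_mb_s, reduction in stages:
        total += data_mb / rate_mb_s
        data_mb *= reduction
    return total

# Two candidate partitionings of a filter -> render -> transfer pipeline:
candidates = {
    "render_remote": [(200.0, 0.1), (50.0, 0.05), (10.0, 1.0)],
    "render_local":  [(200.0, 0.1), (10.0, 1.0), (50.0, 0.05)],
}
times = {name: pipeline_time(1000.0, s) for name, s in candidates.items()}
best = min(times, key=times.get)
print(best, times[best])   # render_remote 7.5
```

Rendering remotely wins here because it shrinks the data before the slow transfer stage, which is exactly the kind of partitioning decision such composed models are meant to inform.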
Dr. Chenn Zhou
2008-10-15T23:59:59.000Z
Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease coke and total energy consumption along with minimizing environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by a lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty of obtaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01T23:59:59.000Z
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
Supercomputing | Computational Engineering | ORNL
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Zhou, Shujia
2009-01-01T23:59:59.000Z
"Acceleration of Numerical Weather Prediction," Proceedings ... Computer Systems for Climate and Weather Models, Shujia Zhou ... processes in climate and weather models demand a continual ...
Gerkmann, Ralf
J. Theis, Computational Modeling in Biology, Institute of Bioinformatics and Systems Biology. Title: From data analysis to network modeling, with applications in systems biology. Author: Fabian ... at detailed models of the system of interest. Our application focus is biological networks, namely gene ...
Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.
2006-12-01T23:59:59.000Z
A computational fluid dynamics (CFD) model was used in an investigation into the suppression of a surface vortex that forms at the south-most spill bay at The Dalles Project. The CFD work complemented work at the prototype and the reduced-scale physical models. The CFD model was based on a model developed for other work in the forebay but had additional resolution added near the spillway. Vortex suppression devices (VSDs) were to be placed between pier noses and/or in the bulkhead slot of the spillway bays. The simulations in this study showed that placing VSD structures, or a combination of structures, to suppress the vortex would still result in near-surface flows being entrained in a vortex near the downstream spillway wall. These results were supported by physical model and prototype studies. However, there was a consensus among the fish biologists at the physical model that the fish would most likely move north, and that if a fish went under the VSD it would immediately exit the forebay through the tainter gate and not get trapped between VSDs, or between the VSDs and the tainter gate, provided the VSDs were deep enough.
Energy and agriculture in the Haitian economy: A computable general equilibrium model
Jones, D.W.; Wu, M.T.C.; Das, S.; Cohn, S.M.
1988-02-01T23:59:59.000Z
This report documents a computable general equilibrium (CGE) model of the economy of Haiti, emphasizing energy use in agriculture. CGE models compare favorably with econometric models for developing countries in terms of their ability to take advantage of available data. The model of Haiti contains ten production sectors: manufacturing, services, transportation, electricity, rice, coffee, sugar cane, sugar refining, general agriculture, and fuelwood and charcoal. All production functions use functional forms which permit factor substitution. Consumption is specified for three income categories of consumers and a government sector with a linear expenditure system (LES) of demand equations. The economy exports four categories of products and imports six. Balanced trade and capital accounts are required for equilibrium. Total sectoral allocations of land, labor and capital are constrained to equal the quantities of these inputs in the Haitian economy as of the early 1980s. The model can be used to study the consequences of fiscal and trade policies and sectorally oriented productivity improvement policies. Guidance is offered regarding how to use the model to study economic growth and technological change. Limitations of the model are also pointed out, as well as user strategies which can lessen or work around some of those limitations. 19 refs.
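The LES demand equations mentioned above have a standard closed form; a minimal sketch with illustrative parameters (not Haiti calibration data):

```python
# Linear expenditure system (LES) demand sketch: each good has a
# subsistence quantity gamma_i, and supernumerary income (income left
# after committed subsistence spending) is split by marginal budget
# shares beta_i. All parameter values are illustrative.

def les_demands(income, prices, gammas, betas):
    """x_i = gamma_i + (beta_i / p_i) * (Y - sum_j p_j * gamma_j)"""
    supernumerary = income - sum(p * g for p, g in zip(prices, gammas))
    return [g + b / p * supernumerary
            for p, g, b in zip(prices, gammas, betas)]

prices = [2.0, 1.0]          # two goods
gammas = [1.0, 2.0]          # subsistence quantities
betas = [0.6, 0.4]           # marginal budget shares (sum to 1)
x = les_demands(income=20.0, prices=prices, gammas=gammas, betas=betas)
spend = sum(p * q for p, q in zip(prices, x))
print(round(spend, 9))   # 20.0: spending exhausts income (budget check)
```

The budget check at the end is the kind of adding-up condition a CGE model must satisfy in every consumer block.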
PM-10 Open Fugitive-Dust-Source computer model (for microcomputers). Model-Simulation
Elmore, L.
1990-04-01T23:59:59.000Z
The computer programs in this package are based on the material presented in the document Control of Open Fugitive Dust Sources, EPA-450/3-88-008. The programs on these diskettes serve two purposes. Their primary purpose is to facilitate data entry, allowing the user not only to enter and verify the data which he/she possesses, but also to access additional data which might not be readily available. The second purpose is to calculate emission rates for the selected source category using the data previously entered and verified. Software Description: The program is written in the BASIC programming language for implementation on an IBM PC/AT and compatible machines using DOS 2.X or a higher operating system. A hard disk with a 5 1/4 inch disk drive or two disk drives, plus a wide-carriage printer (132-character) or a printer capable of printing text in condensed mode, is required. A text editor or word processing program capable of manipulating ASCII or DOS text files is optional.
Leishear, R.; Poirier, M.; Fowley, M.
2011-05-26T23:59:59.000Z
The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks, where 300,000-800,000 gallons of salt solution will be blended in 1.3 million gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective and from a process safety perspective, where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94 inch diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, was used in this research. The test tank represents a 1/10.85 scaled version of an 85 foot diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled, per the SRR Project Engineering Manager. SRNL blending tests investigated various fixed-position, non-rotating, dual nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). The primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks.
Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where the sludge level was shown to remain constant. To experimentally model the sludge layer, a very thin, pourable sludge simulant was conservatively used for all testing. To experimentally model the liquid supernate layer above the sludge in waste tanks, two salt solution simulants were used, which provided a bounding range of supernate properties. One solution was water (H{sub 2}O + NaOH), and the other was an inhibited, more viscous salt solution. The research performed and data obtained significantly advance the understanding of fluid mechanics, mixing theory, and CFD modeling for nuclear waste tanks by benchmarking CFD results against actual experimental data. This research significantly bridges the gap between previous CFD models and actual field experience in real waste tanks. A finding of the 2009 DOE Slurry Retrieval, Pipeline Transport and Plugging, and Mixing Workshop was that CFD models were inadequate to assess blending processes in nuclear waste tanks. One recommendation from that Workshop was that a validation, or benchmarking, program be performed for CFD modeling versus experiment. This research provided experimental data to validate and correct CFD models as they apply to mixing and blending in nuclear waste tanks. Extensive SDI research was a significant step toward benchmarking and applying CFD modeling. This research showed that CFD models not only agreed with experiment, but demonstrated that the large variance in actual experimental data accounts for misunderstood discrepancies between CFD models and experiments. Having documented this finding, SRNL was able to provide correction factors to be used with CFD models to statistically bound full scale CFD results.
Through the use of pilot scale tests performed for both types of pumps and available engineering literature, SRNL demonstrated how to effectively apply CFD results to salt batch mixing in full scale waste tanks. In other words, CFD models were in error prior to development of experimental correction factors determined during this research, which provided a technique to use CFD models fo
Joseph, Earl C.; Conway, Steve; Dekate, Chirag
2013-09-30T23:59:59.000Z
This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
Building ventilation: A pressure airflow model computer generation and elements of validation
Boyer, H; Adelard, L; Mara, T A
2012-01-01T23:59:59.000Z
The calculation of airflows is of great importance for detailed building thermal simulation codes, as these airflows frequently constitute an important thermal coupling between the building and the outside on one hand, and between the different thermal zones on the other. The driving effects of air movement, wind and thermal buoyancy, are briefly outlined, and we look closely at their coupling in the case of buildings by exploring the difficulties associated with large openings. Some numerical problems tied to solving the resulting non-linear system are also covered. As part of a detailed simulation software package (CODYRUN), the numerical implementation of this airflow model is explained, with emphasis on the data organization and processing that allow the calculation of the airflows. Comparisons are then made between the model results and, on one hand, analytical expressions and, on the other, experimental measurements in the case of a collective dwelling.
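The nonlinear pressure-flow system described above can be illustrated with a one-zone sketch, assuming power-law openings; the coefficients are illustrative, and CODYRUN's actual multizone model is far more detailed:

```python
# One-zone pressure-network sketch: each opening carries a power-law flow
# Q = C * sign(dP) * |dP|**n, and the interior pressure is found by
# driving the zone mass balance to zero (bisection on a monotone residual).

def flow(c, dp, n=0.5):
    return c * (abs(dp) ** n) * (1.0 if dp >= 0 else -1.0)

def mass_balance(p_zone, openings):
    """Net flow into the zone; openings = [(C, outside_pressure), ...]."""
    return sum(flow(c, p_out - p_zone) for c, p_out in openings)

def solve_pressure(openings, lo=-100.0, hi=100.0, tol=1e-10):
    """Bisection: the mass balance decreases monotonically in p_zone."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mass_balance(mid, openings) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Windward opening at +10 Pa, leeward at -10 Pa, equal coefficients:
openings = [(0.8, 10.0), (0.8, -10.0)]
p = solve_pressure(openings)
print(abs(p) < 1e-6)   # True: the zone pressure sits at 0 by symmetry
```

With unequal coefficients or stack-driven pressure differences the root moves off zero, which is where the non-linear solver difficulties discussed in the abstract arise.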
Lee, Jungsul; Choi, Kyungsun [Cell Signaling and BioImaging Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)]; Choi, Chulhee, E-mail: cchoi@kaist.ac.kr [Cell Signaling and BioImaging Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Graduate School of Medical Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); KI for Bio Century, KAIST, Daejeon 305-701 (Korea, Republic of)]
2010-01-01T23:59:59.000Z
Mutant ubiquitin found in neurodegenerative diseases has been thought to hamper activation of the transcription factor nuclear factor-kappa B (NF-{kappa}B) by inhibiting the ubiquitin-proteasome system (UPS). It has also been reported that ubiquitin is involved in signal transduction in a UPS-independent manner. We used a modeling and simulation approach to delineate the roles of ubiquitin in NF-{kappa}B activation. Inhibition of the proteasome complex increased maximal activation of IKK, mainly by decreasing UPS efficiency. On the contrary, mutant ubiquitin decreased the maximal activity of IKK. Computational modeling showed that this inhibitory effect of mutant ubiquitin is mainly attributable to decreased activity of the UPS-independent function of ubiquitin. Collectively, our results suggest that mutant ubiquitin affects NF-{kappa}B activation in a UPS-independent manner.
The Computational Sciences. Research
Christensen, Dan
The Computational Sciences. Research activities range from the theoretical foundations ... The teaching mission of the computational sciences includes almost every student in the University ... for computational hardware and software. The computational sciences are undergoing explosive growth worldwide.
Raboin, P J
1998-01-01T23:59:59.000Z
The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.
Whitton, Mary C.
Scalability: the system should require human set-up effort that is at most sublinear in the size of the model. Figure 1 shows a view of our 15-million-polygon model of a coal-fired power plant, rendered by the MMR (Massive Model Rendering) system, a real-time rendering system aimed at computer-aided design (CAD) applications.
Donna Post Guillen; Tami Grimmett; Anastasia M. Gribik; Steven P. Antal
2010-09-01T23:59:59.000Z
The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems with the principal objective to safeguard U.S. Energy Security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR) in which the gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer chain hydrocarbon products, which can be upgraded to gasoline, diesel or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics coupled with reacting flow chemistry and heat transfer, all of which affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamic (CMFD) model to aid in understanding the physico-chemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles and solid catalyst particles) that includes twelve species: (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product in small bubbles, large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined based on the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1].
The model includes heat generation due to the exothermic chemical reaction, as well as heat removal from a constant temperature heat exchanger. Results of the CMFD simulations (similar to those shown in Figure 1) will be presented.
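The Yates–Satterfield macrokinetic model cited above is commonly stated as a Langmuir–Hinshelwood rate law; the following is the standard published form (with a and b as temperature-dependent fitted constants), not notation taken from this abstract:

```latex
% Yates & Satterfield (1991) macrokinetic rate expression for
% Fischer-Tropsch synthesis over a cobalt catalyst:
%   -R_CO      : rate of CO consumption
%   P_CO, P_H2 : partial pressures of CO and H2
%   a, b       : temperature-dependent fitted kinetic constants
-R_{\mathrm{CO}} \;=\; \frac{a\, P_{\mathrm{CO}}\, P_{\mathrm{H_2}}}{\left(1 + b\, P_{\mathrm{CO}}\right)^{2}}
```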
Distributed computing systems programme
Duce, D.
1984-01-01T23:59:59.000Z
Publication of this volume coincides with the completion of the U.K. Science and Engineering Research Council's coordinated programme of research in Distributed Computing Systems (DCS) which ran from 1977 to 1984. The volume is based on presentations made at the programme's final conference. The first chapter explains the origins and history of DCS and gives an overview of the programme and its achievements. The remaining sixteen chapters review particular research themes (including imperative and declarative languages, and performance modelling), and describe particular research projects in technical areas including local area networks, design, development and analysis of concurrent systems, parallel algorithm design, functional programming and non-von Neumann computer architectures.
Buyya, Rajkumar
Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities. Networks have emerged as popular platforms for next-generation parallel and distributed computing. Utility computing is envisioned to be the next generation of IT evolution, depicting how computing needs of users are met.
Automatic Interface Generation for Enumerative Model Computer Science Annual Workshop 2006
Pace, Gordon J.
Sandro Spina, Dept. of Computer Science and A.I., New Computing Building, University of Malta, Malta (sandro.spina@um.edu.mt); Gordon Pace, Dept. of Computer Science and A.I., New Computing Building, University of Malta, Malta. CSAW '06, CSAI Department, University of Malta. Processes can be described using formal techniques.
DAVENPORT, J.
2006-11-01T23:59:59.000Z
Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization.
The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.
Computer modeling of arc welds to predict effects of critical variables on weld penetration
Zacharia, T.; David, S.A.
1991-01-01T23:59:59.000Z
In recent years, there have been several attempts to study the effect of critical variables on welding by computational modeling. It is widely recognized that temperature distributions and weld pool shapes are keys to quality weldments. It would be very useful to obtain relevant information about the thermal cycle experienced by the weld metal, the size and shape of the weld pool, the local solidification rates, temperature distributions in the heat-affected zone (HAZ), and associated phase transformations. The solution of moving boundary problems, such as weld pool fluid flow and heat transfer, that involve melting and/or solidification is inherently difficult because the location of the solid-liquid interface is not known a priori and must be obtained as a part of the solution. Because of the non-linearity of the governing equations, exact analytical solutions can be obtained only for a limited number of idealized cases. Therefore, considerable interest has been directed toward the use of numerical methods to obtain time-dependent solutions for theoretical models that describe the welding process. Numerical methods can be employed to predict the transient development of the weld pool as an integral part of the overall heat transfer conditions. The structure of the model allows each phenomenon to be addressed individually, thereby gaining more insight into their competing interactions. 19 refs., 6 figs., 1 tab.
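The moving-boundary difficulty described in this abstract can be made concrete with the classical Stefan condition at the melt front; this is a textbook sketch, not an equation taken from the cited work:

```latex
% Energy balance at the solid-liquid interface (Stefan condition):
%   k_s, k_l : thermal conductivities of solid and liquid
%   T_s, T_l : temperature fields on each side of the interface
%   n        : interface normal,  v_n : normal interface velocity
%   rho, L   : density and latent heat of fusion
k_s \frac{\partial T_s}{\partial n} \;-\; k_l \frac{\partial T_l}{\partial n} \;=\; \rho\, L\, v_n
% The unknown interface position enters through v_n, so it must be
% solved for together with the temperature field -- which is why
% iterative numerical methods are required.
```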
Computer simulations of the restricted primitive model at very low temperature and density
Chantal Valeriani; Philip J. Camp; Jos W. Zwanikken; René van Roij; Marjolein Dijkstra
2010-01-13T23:59:59.000Z
The problem of successfully simulating ionic fluids at low temperature and low density states is well known in the simulation literature: using conventional methods, the system is not able to equilibrate rapidly due to the presence of strongly associated cation-anion pairs. In this manuscript we present a numerical method for speeding up computer simulations of the restricted primitive model (RPM) at low temperatures (around the critical temperature) and at very low densities (down to $10^{-10}\sigma^{-3}$, where $\sigma$ is the ion diameter). Experimentally, this regime corresponds to typical concentrations of electrolytes in nonaqueous solvents. As far as we are aware, this is the first time that the RPM has been equilibrated at such extremely low concentrations. More generally, this method could be used to equilibrate other systems that form aggregates at low concentrations.
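For readers unfamiliar with the restricted primitive model, its pair potential is standard: equal-sized charged hard spheres in a dielectric continuum. The following is the textbook definition, not notation drawn from the paper itself:

```latex
% Restricted primitive model (RPM) pair potential between ions i and j:
%   sigma     : common ion diameter
%   z_i = +-1 : ion valences (equal numbers of each sign)
%   q         : elementary charge,  epsilon : solvent permittivity
u_{ij}(r) =
\begin{cases}
\infty, & r < \sigma \\[4pt]
\dfrac{z_i z_j\, q^2}{4\pi \varepsilon\, r}, & r \ge \sigma
\end{cases}
```

At low temperature the Coulomb attraction strongly binds cation-anion pairs, which is the source of the slow equilibration the abstract describes.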